I'm not 100% sure people are aware of what checksum64 is.
….
My understanding of checksum64 is that you sum each byte…at the end it should be 0, and if it's not then your drive isn't wiped. The reason you use checksum64 is due to the size of today's drives.
Hmmm, no, not really, strictly speaking.
Checksum 64 (like all plain "sums") does NOT give a "guarantee" about the values involved.
My bad :( , I was thinking of the Sum 8/16/32/64 algorithm, NOT of Checksum 8/16/32/64 as described here
http//
i.e. if you use sum 8, 0x01 + 0xFF gives you 0x00.
With sum 16, 0x0001 + 0xFFFF is 0x0000; with sum 32, 0x00000001 + 0xFFFFFFFF is 0x00000000; with sum 64, 0x0000000000000001 + 0xFFFFFFFFFFFFFFFF is 0x0000000000000000.
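If it helps, here is a minimal Python sketch of what I mean by a plain "sum" (my own illustration, not any particular tool's code):

# Plain "sum" with a fixed register width: the total wraps around (mod 2**bits),
# so non-zero inputs can still produce a zero result.
def plain_sum(values, bits):
    mask = (1 << bits) - 1          # e.g. 0xFF for 8 bits, 0xFFFF for 16 bits, ...
    total = 0
    for v in values:
        total = (total + v) & mask  # keep only the low 'bits' bits
    return total

print(hex(plain_sum([0x01, 0xFF], 8)))                               # 0x0
print(hex(plain_sum([0x0000000000000001, 0xFFFFFFFFFFFFFFFF], 64)))  # 0x0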
You need a more sophisticated checksum algorithm to state that all added bytes are 0, such as the ones mentioned: CRC32 (ITU-T V.42), MD5, SHA1 and SHA256.
If anyone uses any "plain" sum as a replacement for any of them, it is conceptually (setting aside its practical usefulness, i.e. speed) "wrong".
Having a sum result of 0 is not actually even a "collision" (collisions can be calculated for CRC32 and MD5, are at least theoretically within the capabilities of modern computers for SHA1, but not - yet - for SHA256); it is simply something that may happen.
Widening the bit sizes greatly decreases the probabilities, but nothing more than that.
Overall, the whole thing may be simplified to the equation
sum_up_to_last_item+last_item=0
Probabilities are then as follows (the probability that the last item has the value that satisfies the equation or, vice versa, that the sum has a value that, added to the last item's value, satisfies the equation):
Sum 8 1/(0xFF+1)=1/2^8=1/256
Sum 16 1/(0xFFFF+1)=1/2^16=1/65536
Sum 32 1/(0xFFFFFFFF+1)=1/2^32=1/4294967296
Sum 64 1/(0xFFFFFFFFFFFFFFFF+1)=1/2^64=1/18446744073709551616
Now, 1/18446744073709551616 is a really teeny-tiny probability, still it differs from "certainty".
Though of course for all practical purposes it is OK, the distinction should IMHO be made.
jaclaz
I'm not sure what the disagreement is about if you're adding individual bytes together. I can't remember why they changed the recommendation to 64, something about bigger drives, but really as soon as you hit anything > 0 in the sum you end the test and report. And as long as you're taking one byte at a time (which is slower, I'm sure) and adding it to a 2-byte register then you're all set.
Sure….if you add 0x01+0xFF you'll get 0x00 if the result is held in 1 byte because it'll drop the 0x100. But not with 64 bits to store your result.
0xFF + 0xFF = 0x1FE and then your algorithm can exit.
Thinking through it a bit more I think the move to checksum64 was because of drive sizes and storing the sum of all FF's on the drive; otherwise you end up with the condition you mentioned. That being said, a visual inspection will show that result pretty quickly.
MD5/SHA-n will not tell you that all bytes are 0…it'll give you a hash of the drive sure, but you would have to know the all-zero condition for a drive first.
MD5/SHA-n will not tell you that all bytes are 0…it'll give you a hash of the drive sure, but you would have to know the all-zero condition for a drive first.
Sure, hence the already mentioned calculator
https://www.forensicfocus.com/Forums/viewtopic/p=6591998/#6591998
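For what it's worth, computing the reference hash of an all-zero stream of a given size is easy enough to do yourself; a minimal Python sketch (mine, not the linked calculator):

import hashlib

def hash_of_zeros(total_bytes, algo='md5', chunk_size=1024 * 1024):
    # Hash a stream of 'total_bytes' zero bytes without materialising it all in memory.
    h = hashlib.new(algo)
    zero_chunk = bytes(chunk_size)
    remaining = total_bytes
    while remaining > 0:
        n = min(remaining, chunk_size)
        h.update(zero_chunk[:n])
        remaining -= n
    return h.hexdigest()

print(hash_of_zeros(512))   # reference hash for a single all-zero 512-byte sector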
As said, widening the size of the sum register lessens the probability of a "collision"; still, the sum 64 of a file containing
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
is 0
but also the sum 64 of a file containing
FF FF FF FF FF FF FF FF 01 00 00 00 00 00 00 00
is still 0.
And also that of files containing
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
and
FE FF FF FF FF FF FF FF 01 00 00 00 00 00 00 00 01 00 00 00 00 00 00 00
will still be 0.
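If anyone wants to verify the above, a small Python sketch (assuming the Sum 64 adds 8-byte little-endian tokens modulo 2^64, which is the reading my examples are based on):

import struct

def sum64(data):
    # Sum the file as 8-byte little-endian "tokens", keeping only 64 bits.
    total = 0
    for (token,) in struct.iter_unpack('<Q', data):
        total = (total + token) & 0xFFFFFFFFFFFFFFFF
    return total

blank  = bytes.fromhex('00' * 24)
file_a = bytes.fromhex('FFFFFFFFFFFFFFFF' '0100000000000000')
file_b = bytes.fromhex('FEFFFFFFFFFFFFFF' '0100000000000000' '0100000000000000')

for data in (blank, file_a, file_b):
    print(hex(sum64(data)))   # prints 0x0 for all three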
Now, if you make an "evolved" sum 64 algorithm with a condition that the calculated sum must be 0 at all times, that would work (but it is no longer a simple sum 64).
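Sketched in Python under the same assumption (8-byte little-endian tokens), it would look something like this, at which point it is really just an "every token must be zero" test:

import struct

def evolved_sum64_is_blank(data):
    total = 0
    for (token,) in struct.iter_unpack('<Q', data):
        total = (total + token) & 0xFFFFFFFFFFFFFFFF
        if total != 0:          # the running sum must stay 0 at all times
            return False        # bail out at the first offending token
    return True

# The colliding example from above is rejected at its very first token:
print(evolved_sum64_is_blank(bytes.fromhex('FEFFFFFFFFFFFFFF' '0100000000000000' '0100000000000000')))  # False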
jaclaz
Sure, but this operates under the mentality that "CRC32 (ITU-T V.42), MD5, SHA1 and SHA256" are appropriate for this type of verification, which I don't believe they are
Ok. Let's take your example
FF FF FF FF FF FF FF FF 01 00 00 00 00 00 00 00
$sum variable size is 8 bytes 0x0000000000000000, and we'll do this the slow way, taking a byte at a time.
$sum = 0x00000000000007F9
Take the other one
FE FF FF FF FF FF FF FF 01 00 00 00 00 00 00 00 01 00 00 00 00 00 00 00
$sum = 0x00000000000007F9
Both of them aren't 0.
Just because the algorithm stores the result in 8 bytes doesn't mean that it takes 8 bytes at a time, that defeats the purpose.
If my math is correct (2^64 / 0xFF bytes of solid FF's, i.e. roughly 64 petabytes) then this can hold the sum for any drive around today without overflowing. When we have drives anywhere near that size then we'll have to change to checksum128 (or…just modify the algorithm to take x bytes at a time and compare them to zero until the condition fails…which is faster for non-wiped drives).
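A rough Python sketch of that byte-at-a-time idea (the function name and the early-exit condition are just mine, for illustration):

def bytewise_sum_is_blank(data):
    # Add one byte at a time into a 64-bit accumulator; any non-zero byte
    # pushes the sum above 0, so we can stop at the first one we meet.
    total = 0
    for b in data:
        total = (total + b) & 0xFFFFFFFFFFFFFFFF
        if total != 0:
            return False, total
    return True, total

print(bytewise_sum_is_blank(bytes.fromhex('FFFFFFFFFFFFFFFF' '0100000000000000')))
# -> (False, 255): exits at the very first 0xFF byte, well before reaching 0x7F9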
Re Verifying a HDD is blank.
under Linux (or Windows using Babun/cygwin … etc.)
One method is to compare the device to be checked with the zero device using the cmp command.
(may need an elevated terminal)
If the device to check is blank (full of zeros) then you will get an EOF (i.e. cmp reaches the end of the device without finding any non-zero bytes).
e.g. for a wiped drive:
root@grml:/tmp# cmp /dev/sdb /dev/zero
cmp: EOF on /dev/sdb
root@grml:/tmp#
If not blank
If the device is not blank then you will get an error message with an offset to where this occurs.
e.g.
root@eeedebian:/tmp# cmp /dev/sdb /dev/zero
/dev/sdb /dev/zero differ: byte 513, line 1
root@eeedebian:/tmp#
Shred/wipe & verify with one command
I often shred and compare/verify with the same command.
e.g.
shred -z -n 0 /dev/sdb ; cmp /dev/sdb /dev/zero
RE checksum64
checksum64 is a daft verification method. Apologies, but I really dislike the inefficiency of checksum64.
With cmp it stops as soon as it finds a difference, whereas with checksum64 you have to wait until it has read the whole device first.
Let alone who has a copy of checksum64 ?
Let's say you forgot to wipe an 8TB HDD but, to be sure, decided to verify the drive is blank.
With checksum64 you would be waiting hours and hours for an answer, whereas with cmp you would know in seconds that you needed to wipe the disk AND exactly where the beginning of the offending data is.
With checksum64 you would then have to search over the disk again to find the beginning of the offending data (more inefficiency, grr, evil).
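If cmp is not available for some reason, the same single-pass, stop-at-the-first-difference behaviour is easy to approximate; a Python sketch (the device path and chunk size are placeholders, and it needs an elevated terminal just like cmp):

def first_nonzero_offset(path, chunk_size=1024 * 1024):
    # Read the device in chunks and stop at the first chunk that is not all zeros,
    # reporting the offset of the first offending byte (similar to what cmp does).
    offset = 0
    with open(path, 'rb') as dev:
        while True:
            chunk = dev.read(chunk_size)
            if not chunk:
                return None                     # reached EOF: the device is blank
            if chunk.count(0) != len(chunk):    # quick "is this chunk all zeros?" test
                for i, b in enumerate(chunk):
                    if b != 0:
                        return offset + i
            offset += len(chunk)

off = first_nonzero_offset('/dev/sdb')
print('blank' if off is None else 'first non-zero byte at offset %d' % off)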
I really don't know why IACIS would be recommending such a poor tool as someone earlier mentioned.
@randomaccess
Sorry I didn't see your reply.
Sure, but this operates under the mentality that "CRC32 (ITU-T V.42), MD5, SHA1 and SHA256" are appropriate for this type of verification, which I don't believe they are
Which is OK, since I believe they are, thus creating a nice symmetrical situation.
Ok. Let's take your example
FF FF FF FF FF FF FF FF 01 00 00 00 00 00 00 00
$sum variable size is 8 bytes 0x0000000000000000, and we'll do this the slow way, taking a byte at a time.
$sum = 0x00000000000007F9
Take the other one
FE FF FF FF FF FF FF FF 01 00 00 00 00 00 00 00 01 00 00 00 00 00 00 00
$sum = 0x00000000000007F9
Both of them aren't 0.
Just because the algorithm stores the result in 8 bytes doesn't mean that it takes 8 bytes at a time, that defeats the purpose.
Then maybe we are talking of a different algorithm. 😯
The "smaller" Sum8, Sum16 and Sum32 work exactly as I described, summing "tokens" that are respectively 1, 2 or 4 bytes and accumulating the result in a same-sized value, so Sum64 should work the same.
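To make the distinction concrete, a small Python sketch of both readings side by side (again assuming 8-byte little-endian tokens for Sum 64, which may not be what your program does):

import struct

DATA = bytes.fromhex('FFFFFFFFFFFFFFFF' '0100000000000000')
MASK = 0xFFFFFFFFFFFFFFFF

# Token-wise Sum 64: add 8-byte little-endian values, keep 64 bits -> wraps to 0 here.
token_sum = 0
for (token,) in struct.iter_unpack('<Q', DATA):
    token_sum = (token_sum + token) & MASK

# Byte-wise sum into a 64-bit register -> 0x7F9, as in your example.
byte_sum = 0
for b in DATA:
    byte_sum = (byte_sum + b) & MASK

print(hex(token_sum), hex(byte_sum))   # 0x0 0x7f9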
Any link to the algorithm (and/or the program) that you are referring to?
OK, I found it, you were referencing Checksum 8/16/32/64 as described here
http//
I was talking of Sum 8/16/32/64, my bad :( , sorry for the confusion.
@hydrocloticacid
Yes :) that is seemingly a more valid approach, on average faster (I am excluding the malicious case of writing a 01 to the last byte of the last sector of the device ;) ).