
Verification/Validation - What is the Expectation?

5 Posts
3 Users
0 Reactions
1,938 Views
(@ashishsingh)
Eminent Member
Joined: 11 years ago
Posts: 29
 

Hi,

Yes, a procedure for verification and validation of results is very much required. Many automated forensic tools are available that play a crucial role in the verification and validation process.

The best mechanism is the use of function mapping.

• Validation can be done by using function mapping.
• Verification, of course, is then done on the validated results.

Regards


   
(@thefuf)
Reputable Member
Joined: 17 years ago
Posts: 262
 

You conduct and complete your analysis using whatever your favorite tools are. ……

Here is the question: is there an expectation to validate or verify these results? And if there is, how? Does this mean you begin the investigation again using your second-favorite tool and complete it from start to finish, or something else?

Computer forensic examiners tend to trust the tools they use. This means that only a few examiners do any kind of verification/validation of tools in each case, i.e. many examiners validate their tools once and then use them as trusted without further verification/validation (unless some specific doubts arise), and some examiners never do any kind of verification/validation (what a shame).

Also, there is no practical way to verify/validate every aspect of the "under the hood" operations of the tools you use. You can do dual-tool verification to ensure that the Windows registry parser you used in a case gives results similar to another parser's results, but this doesn't verify that your file system driver worked properly (read the contents of a Windows registry file correctly) or that your operating system detected all partitions existing on a disk and didn't miss anything (e.g. didn't miss another Windows installation), etc. If you are going to verify/validate every piece of code being executed on an examiner's computer when dealing with a case, prepare to spend years on that case. You can't afford this, so you have to distinguish verification/validation of tools from their usage in a specific case, and do verification/validation of tools only when in doubt (you have encountered a new tool; the results you obtained using a tool look strange; and so on).
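
For illustration, a minimal sketch of what such a dual-tool comparison can look like in practice, assuming both registry parsers can export their findings to CSV with key path, value name and value data columns (the file names and column layout below are hypothetical):

import csv

def load_export(path):
    # Load a parser's CSV export into a set of (key_path, value_name, value_data) tuples.
    # Assumes a header row with the hypothetical columns: key_path, value_name, value_data.
    rows = set()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            rows.add((row["key_path"].lower(), row["value_name"], row["value_data"]))
    return rows

tool_a = load_export("parser_a_export.csv")  # hypothetical export from the first parser
tool_b = load_export("parser_b_export.csv")  # hypothetical export from the second parser

# Entries reported by one parser but not the other are the ones worth a manual look.
for entry in sorted(tool_a ^ tool_b):
    print("Disagreement:", entry)

Agreement between the two parsers still says nothing about the file system driver or partition detection, which is exactly the point above.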

As for tool verification/validation methods, many examiners do "black box" testing based on sample data and dual-tool testing (run two similar tools, compare the results). Both methods have disadvantages (e.g. many metadata extraction tools share the same code, resulting in wrong translation of timestamps in MPEG-4 files to a human-readable form, because ISO released sample code with a wrong timestamp translation constant; and "black box" tests simply can't cover all input data states possible in practice), but you can always take a hex editor and do some data parsing with your brain. However, this is not the solution when dealing with evidentiary data modification issues, e.g. when you use a live distribution that is not forensically sound to acquire the evidence without a hardware write blocker, because you can't recover overwritten data with a hex editor. In other words, you have to distinguish between source data interpretation errors, which can be mitigated by interpreting the same data again using another tool, and source data alteration errors, which can't be mitigated after the fact.
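
To illustrate the MPEG-4 timestamp example: ISO base media (MP4) creation times are stored as seconds since 1904-01-01 00:00:00 UTC, so a value copied out of the mvhd box with a hex editor can be converted with a few lines of code and compared with what a metadata extraction tool reports (the raw value below is made up):

from datetime import datetime, timedelta, timezone

# MP4 (ISO base media) epoch: 1904-01-01 00:00:00 UTC.
MP4_EPOCH = datetime(1904, 1, 1, tzinfo=timezone.utc)

def mp4_timestamp_to_utc(seconds_since_1904):
    # Convert a raw mvhd/tkhd creation_time value to a human-readable UTC datetime.
    return MP4_EPOCH + timedelta(seconds=seconds_since_1904)

raw_value = 3676000000  # hypothetical value read with a hex editor from a file in a case
print(mp4_timestamp_to_utc(raw_value).isoformat())
# Compare this against the creation time your tool reports for the same file.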


   
TuckerHST
(@tuckerhst)
Estimable Member
Joined: 16 years ago
Posts: 175
 

If you are going to verify/validate every piece of code being executed on an examiner's computer when dealing with a case, prepare to spend years on that case.

Amen. It is neither practical nor important to validate everything on every case.

Usually a case hinges on a small number of artifacts, documents, etc. To those linchpin items of interest, I will typically give added scrutiny, including validating those specific results with a different tool, and/or checking them manually.


   
TuckerHST
(@tuckerhst)
Estimable Member
Joined: 16 years ago
Posts: 175
 

When you do validate with a specific tool, what is it that you are checking? In other words, if Tool 1 found "SomethingImportant.txt" at sector 545, are you using a hex editor, going to sector 545, and looking to see if "SomethingImportant.txt" is there?

It depends on the artifact, but yes, sure, manually checking in hex might be applicable. One might also export the artifact or file from the case and use a specialized tool, including open source tools (e.g., RegRipper), to see if it gives similar results. If not, then more research may be required in order to achieve results that are explainable and reliable.
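
As a rough sketch of the manual sector check described in the question (the 512-byte sector size and the raw image name are assumptions):

SECTOR_SIZE = 512          # assumed logical sector size of the image
SECTOR_OF_INTEREST = 545   # sector reported by the first tool, from the example above

# "image.dd" is a hypothetical raw image of the evidence.
with open("image.dd", "rb") as image:
    image.seek(SECTOR_OF_INTEREST * SECTOR_SIZE)
    data = image.read(SECTOR_SIZE)

# NTFS stores file names as UTF-16LE, so check both a raw ASCII and a UTF-16LE
# encoding of the name before concluding it is not there.
name = "SomethingImportant.txt"
found = name.encode("ascii") in data or name.encode("utf-16-le") in data
print("Name present in sector %d: %s" % (SECTOR_OF_INTEREST, found))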

Incidentally, validation is especially important when the tool itself is making algorithm-based decisions that involve judgment, such as recovering deleted data. For instance, if data in a SQLite DB is recovered from a freelist leaf page, can that data really be said to have been "deleted"? In a spoliation case, that can be extremely important.

Just because the software calls it "deleted" does not make it so. Here, validation might also involve comparing the "recovered" rows to active rows. In my experience, autovacuum makes it likely that in a well-used database, some recovered data will match active data. In other words, you are likely to see "false-positives" for deletion. Before drawing the inference that recovered data means it was deleted, make sure there's not a more likely explanation (e.g., SQLite page optimization).
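
A minimal sketch of that comparison, assuming the recovery tool can export the rows it "recovered" to a CSV in the same column order as the live table (the table name, column names and file names below are hypothetical):

import csv
import sqlite3

DB_PATH = "messages.db"               # hypothetical evidence database
RECOVERED_CSV = "recovered_rows.csv"  # hypothetical export of rows carved from freelist pages

# Load the rows currently active in the table the tool recovered from.
conn = sqlite3.connect(DB_PATH)
active = {tuple(str(col) for col in row)
          for row in conn.execute("SELECT id, timestamp, body FROM messages")}
conn.close()

# A "recovered" row identical to an active row is a candidate false positive:
# it may be a stale copy left behind by page reorganization, not a deleted record.
with open(RECOVERED_CSV, newline="", encoding="utf-8") as f:
    for row in csv.reader(f):
        verdict = "matches an active row" if tuple(row) in active else "not found among active rows"
        print(verdict, row)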

This is also true when reporting raw aggregate numbers after processing a drive image with a forensic tool. The carving process is likely to produce a large number of false positives. Naively reporting "n # of files" in various categories may well be misleading. For example, what if a significant proportion of files in a category (e.g., shortcuts) were copies (cryptographic duplicates) created as a result of some automated process, such as Windows XP restore points? It would be misleading to report "n # of files" without some kind of qualification as to what that means.
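
One way to qualify such a count: hash the exported files and report how many are merely cryptographic duplicates of one another (the directory name below is hypothetical; any strong hash would do):

import hashlib
from collections import Counter
from pathlib import Path

EXPORT_DIR = Path("carved_shortcuts")  # hypothetical directory of files exported by the carver

hashes = Counter()
for path in EXPORT_DIR.rglob("*"):
    if path.is_file():
        hashes[hashlib.sha256(path.read_bytes()).hexdigest()] += 1

total = sum(hashes.values())
unique = len(hashes)
print(f"{total} carved files, of which {unique} are distinct by SHA-256")
print(f"{total - unique} files are cryptographic duplicates of another carved file")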

To quote Inman/Rudin:
"Results of an evidence examination presented without a statement about the strength of the relationship of the evidence to the putative source is an abdication of the responsibilities of the competent examiner. No one is better able to comment on the strength than the forensic scientist." – Principles and Practices of Criminalistics, p. 141

It's best to remember that the expert is testifying, not the tool. Do whatever you have to do in order to ensure your testimony is truthful and not misleading. Almost all digital evidence rests on inferences drawn about the data. Be careful that the inferences are valid.


   
(@thefuf)
Reputable Member
Joined: 17 years ago
Posts: 262
 

Also, when it comes to certification (CFCE, CCE, and any other practical testing), are they expecting you to do a validation? In other words, while in actuality it is impractical to validate in the real world, the superior method (competency expectation) would in fact be to validate?

I guess this highly depends on your jurisdiction, and there is no universal answer. Computer forensic certification organizations can't override rules originating from applicable laws. In some countries you are only allowed to use tools and methods approved by a corresponding government agency (and no validation on your side is required), in other countries you may use any method and any tool you want if you are able to prove its validity when questions arise (validation on your side may be required), and so on (e.g. the UK FSR proposed the following draft regulation: "The Regulator already requires that validation is performed before a method is used in live casework, and that by October 2015, the validation of imaging of conventional hard drives is in the format required in the Codes" – see the full draft in PDF).

When dealing with validation of data interpretation processes inside a tool, one can only do some basic tests without referring to a specific data set (I'm talking about "black box" approaches). One can demonstrate that a tool displays the contents of a file system correctly for a given HDD image, but this doesn't guarantee that the same tool will produce valid results under different conditions (when dealing with a file system left in another state, e.g. not properly unmounted). Conversely, you can do a deeper validation using only one piece of input data (e.g. a particular evidentiary medium or a specific file) – because your task here is to prove that a tool did everything correctly under the given conditions only (e.g. translated a timestamp to a human-readable form correctly; you don't need to prove that the tool does this correctly for every other timestamp/timezone possible). And the good thing is that nobody is losing digital evidence due to errors in data interpretation processes (when in doubt, a court can order you or another person to re-examine a particular piece of evidence, like a file or a file system metadata block, using another tool).
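
To make the "one specific timestamp" idea concrete, here is a minimal sketch of validating a single value: a Windows FILETIME is a 64-bit little-endian count of 100-nanosecond intervals since 1601-01-01 00:00:00 UTC, so eight bytes copied out of the evidence with a hex editor can be decoded independently and compared with what the tool reports (the byte string below is made up):

import struct
from datetime import datetime, timedelta, timezone

FILETIME_EPOCH = datetime(1601, 1, 1, tzinfo=timezone.utc)

def filetime_to_utc(raw_bytes):
    # Decode 8 little-endian bytes of a Windows FILETIME (100-ns ticks since 1601-01-01 UTC).
    ticks, = struct.unpack("<Q", raw_bytes)
    return FILETIME_EPOCH + timedelta(microseconds=ticks // 10)

raw = bytes.fromhex("00d0129bf3a3d601")  # hypothetical bytes copied from an $MFT record
print(filetime_to_utc(raw).isoformat())
# Compare this single value with the timestamp the tool reports for the same attribute.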

When dealing with validation of possible data alteration issues inside a tool, you are in trouble. If you don't use a hardware write blocker for some reason (there are many valid reasons for this), a tool can alter the evidence, and no further re-examination will help to recover the original piece of data. Many live forensic Linux distributions automatically alter the data stored on attached drives, and the word "automatically" points us to the fact that these evidentiary data modifications don't depend on an examiner's mistakes, but on the tool itself. Even worse, some alterations of evidence can't be noticed after the fact by looking at the data itself (e.g. PALADIN EDGE 5.02 will automatically sync two RAID1-like LVM disks when they are out of sync, but you will never notice that after the fact, because there are no indicators of this process anywhere in the LVM metadata). And even NIST testers fail to notice obvious data alteration issues when validating live distributions, so don't treat validation as something absolute.


   