It's like nuclear war: any moron can press the button, but after that no one is really sure exactly what goes on or how you end up at the concluding event.
Would this button also let you know if you had retrieved all the evidence, or would there be a chaos factor with a % indicator showing the probabilistic chance of what you retrieve being accurate and related to the case at hand?
This is a very interesting discussion, and without giving my age away, I remember similar discussions when we began using products like EnCase and FTK and how they would "dumb down" the practice of digital forensics. The reality is that these dedicated forensic applications have become useful tools, and there is no doubt that these "point and click" forensic applications will continue to be useful tools for us.
However, the real test comes on the witness stand, and the last time I checked, there was no forensic tool that replaced this requirement. In fact, this will probably create opportunities for digital forensic examiners who do defence work, who should have a field day attacking the methodologies and competencies of users of "point and click" products who lack the requisite digital forensic skills and knowledge.
When one looks at the report of the National Academy of Sciences in the United States on the state of forensic science practice in general (which includes digital forensics), we need to consider whether the use of any tool, without the necessary skill and knowledge, will ever stand the test of time in the courts going forward.
How relevant is "Daubert Test" to this discussion? I do not see much mentioned about it today.
I would like to give some views from my side as a developer. For many tasks it is far easier to write a program once than to try to train/educate users on how to fill in certain parameters. In my world of data recovery there are thousands of ways the data may have been corrupted, lost or damaged, so any one-button approach obviously has limitations.
Somehow a user needs to know what the program will detect, and what it may miss. This is where the experience of the examiner is still required, though for most applications I think aiming for a one-button approach has a lot to recommend it. It just needs to be a very well designed button. The button also needs some kind of feedback so it can learn from its mistakes. Ultimately one would hope that the feedback from examiner 'A' will help examiner 'B' with extra formal learning. It will also try to ensure that examiner 'C' does not make a silly mistake and miss the obvious.
All general-purpose solutions have limitations, but as mentioned in an earlier post, they can cover areas such as DOS that the examiner has never seen. (He didn't mention CP/M, and the 500 variations of floppy disks that came with it!)
On a different tack, we all trust a 'one button' approach to get our plane on the runway in foggy conditions when the pilot cannot see.
…
On a different tack, we all trust a 'one button' approach to get our plane on the runway in foggy conditions when the pilot cannot see.
But all pilots are initially trained 'on the stick' and know if the autopilot is not reacting appropriately. You have to know what your tools are supposed to do and respond accordingly.
I have seen FTK and EnCase give significantly different outputs from large dbx files (>1.3 GB). If you just push a button and accept the output without evaluating whether it is expected or reasonable, you are flying blind (to continue the pilot analogy).
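To make that kind of cross-check concrete, here is a minimal sketch (Python) of diffing the messages two tools recovered from the same dbx file. The file names, the CSV export format, and the "body" column are assumptions for illustration only; real exports from FTK or EnCase would need their own parsing.

```python
import csv
import hashlib

def load_message_digests(csv_path, body_column="body"):
    """Read a tool's CSV export of recovered messages and hash each message body."""
    digests = set()
    with open(csv_path, newline="", encoding="utf-8", errors="replace") as f:
        for row in csv.DictReader(f):
            body = row.get(body_column, "")
            digests.add(hashlib.sha256(body.encode("utf-8")).hexdigest())
    return digests

# Hypothetical export files from two different tools run on the same dbx file.
tool_a = load_message_digests("tool_a_export.csv")
tool_b = load_message_digests("tool_b_export.csv")

print(f"Tool A recovered {len(tool_a)} unique messages, Tool B {len(tool_b)}")
print(f"Only in Tool A: {len(tool_a - tool_b)}")
print(f"Only in Tool B: {len(tool_b - tool_a)}")
```

Any non-empty difference between the two sets is exactly the sort of discrepancy an examiner has to notice and explain before putting either tool's output in a report.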
…
On a different tack, we all trust a 'one button' approach to get our plane on the runway in foggy conditions when the pilot cannot see. But all pilots are initially trained 'on the stick' and know if the autopilot is not reacting appropriately. You have to know what your tools are supposed to do and respond accordingly.
To further the point, if the autopilot was always better, why bother with pilots at all?
When Westinghouse built the control mechanism for San Francisco's BART, they argued that it was so foolproof that you'd never need operators for the trains. There were three computers (similar to planes) to make decisions; if any two disagreed, the third would either make the call or stop the train. In trial runs the trains kept crashing.
After analyzing the problem they concluded that since all three computers were using the same basic algorithm, there was no possibility that they'd ever disagree.
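A toy illustration of that failure mode (not the actual BART logic, and the braking formula here is made up): when all three "redundant" computers run the identical algorithm, a two-out-of-three vote simply confirms the same wrong answer.

```python
from collections import Counter

def buggy_braking_distance(speed_mph):
    # Hypothetical flawed formula deployed to all three computers:
    # it underestimates stopping distance at speed (pretend the correct model is quadratic).
    return 0.5 * speed_mph

def triple_modular_vote(inputs, replicas):
    """Return the majority answer from the replicas, or None to stop the train."""
    answers = [replica(inputs) for replica in replicas]
    value, votes = Counter(answers).most_common(1)[0]
    return value if votes >= 2 else None

# All three "independent" computers run the identical algorithm,
# so they agree unanimously -- even when they are all wrong.
replicas = [buggy_braking_distance] * 3
print(triple_modular_vote(70, replicas))  # confidently returns the same wrong answer
```

Redundancy only catches independent failures; a common-mode bug sails straight through the vote, which is why a human who knows what the output *should* look like still matters.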
Anyone remember January 15, 1990, when 5 million callers lost service for up to nine hours because AT&T neglected to put a few lines of code in its signalling software? This was running on a production network!
As Ronald Reagan used to say (although he didn't coin the phrase) "Trust but verify!"
I believe that, like any other complex process, digital forensics will evolve over time. I think the point is not to be hostile to PBF tools per se but to the attitude of the examiner behind them. If anyone thinks that any function in business can be completely automated to the point of removing human oversight, they are deluded. PBF allows for exactly what David suggested: a method to improve the manpower and resources of culling through TONS of hay to find the needles - there is nothing wrong with taking a magnet to the piles of hay to improve efficiency. In the end a human still has to think about the findings and report on the results. As a practitioner, if I can get to that stage faster with fewer headaches, I'll take it.
How relevant is "Daubert Test" to this discussion? I do not see much mentioned about it today.
Daubert and the resulting Federal Rules of Evidence Rule 702
Rule 702. Testimony by Experts
If scientific, technical, or other specialized knowledge will assist the trier of fact to understand the evidence or to determine a fact in issue, a witness qualified as an expert by knowledge, skill, experience, training, or education, may testify thereto in the form of an opinion or otherwise, if (1) the testimony is based upon sufficient facts or data, (2) the testimony is the product of reliable principles and methods, and (3) the witness has applied the principles and methods reliably to the facts of the case.
As well as Frye
To meet the Frye standard, scientific evidence presented to the court must be interpreted by the court as "generally accepted" by a meaningful segment of the associated scientific community. This applies to procedures, principles or techniques that may be presented in the proceedings of a court case.
This does bring up an interesting point about the veracity of what is generally accepted as procedures for finding and reporting on electronic evidence by "qualified" individuals. Much of what we do is not well known or generally accepted. Every day on this forum there are debates among the senior members of the field on best approaches and practices. Ultimately it boils down to the competency of the examiner - the person, not the tool, has to take the stand. Each case is different and will require different tools, right? So why would each case also require different skill levels of the people using the tools? Not everyone has to be a master of the field, but if they are able to think analytically and report on what they find accurately and clearly - who cares that they might have used an "off-the-shelf" solution and can take the stand to explain the process.
>>snip
Ultimately it boils down to the competency of the examiner - the person, not the tool, has to take the stand. Each case is different and will require different tools, right? So why would each case also require different skill levels of the people using the tools? Not everyone has to be a master of the field, but if they are able to think analytically and report on what they find accurately and clearly - who cares that they might have used an "off-the-shelf" solution and can take the stand to explain the process.
I have been in LE a long time, and I always tell the 'newbs' that you don't earn your salary until the minute you are sworn in to testify.
RB
This does bring up an interesting point about the veracity of what is generally accepted as procedures for finding and reporting on electronic evidence by "qualified" individuals. Much of what we do is not well known or generally accepted. Every day on this forum there are debates among the senior members of the field on best approaches and practices.
Interestingly, a similar point was raised in an unsuccessful Daubert challenge to my testimony in a Federal court proceeding. Opposing counsel tried to use comments and questions posted on this and other forums to challenge my admissibility as an expert, in particular the "Has anybody ever seen this…?" kind of question.
My response was to the effect that there is no one out there who has seen it all and that advancing knowledge requires sharing of experiences and observations. I noted that asking questions on a forum such as this is no different than a curbside consult.
In supporting my position, the judge ruled that participation in fora such as these, even when the identities of all of the participants are not known, is a legitimate form of professional development, as long as the expert's opinion is based upon the evidence and not the result of a posting on a forum.