"Digital Evidence Discrepancies - Casey Anthony Trial". http//
Excellent piece by Craig Wilson. This is what computer forensics is all about.
Views of John Bradley, creator of "the other tool".
I agree with JC. This is an excellent article and a must-read.
Both of these articles make interesting reading. It's a real eye-opener to see how the prosecution dealt with this issue (or rather, failed to deal with it effectively).
Guardian Digital Forensics (who were the defense's consulting expert) shared their experiences on the topic as well:
http//
Corey Harrell
"Journey into Incident Response"
http://journeyintoir.blogspot.com
I don't think either of the tool developers has any reason to be defensive here, and I don't see any need for finger pointing. If someone gave me what is essentially a beta release, or a release with new functionality, it's my responsibility to validate it for my circumstances. I've seen lots of glitches reported in major CF tools over the years, but Guidance and AD are still in business, and the work still got done.
I hope this just becomes a teachable moment, and I thank the authors of both tools for providing additional information to help inform us on the lessons to be learned. The major lesson being: validate your tools, compare your results, and resolve discrepancies before testifying.
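Just to illustrate the "compare your results" habit, here is a minimal sketch of what cross-validating two tools' exports might look like. The file names and column headings are made up for this example; they are not the actual report formats from the case.

    import csv

    def load_counts(path, url_col="URL", count_col="VisitCount"):
        """Read a tool's CSV export into a {url: visit_count} mapping."""
        counts = {}
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                counts[row[url_col]] = int(row[count_col])
        return counts

    # Hypothetical export files from two different history tools.
    tool_a = load_counts("tool_a_history_export.csv")
    tool_b = load_counts("tool_b_history_export.csv")

    # Flag every URL where the two tools disagree, so the discrepancy can be
    # investigated and resolved before anything goes into a report.
    for url in sorted(set(tool_a) | set(tool_b)):
        a, b = tool_a.get(url), tool_b.get(url)
        if a != b:
            print(f"DISCREPANCY: {url} -> tool A: {a}, tool B: {b}")

Nothing fancy, but even a quick diff like this would surface the kind of visit-count disagreement at issue here before it ever reached the courtroom.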
I've met both of the forensic examiners in this case, and believe they are both very good at their work. It's never easy being on the bleeding edge of an issue, trying to work within the limits of the technology while maintaining the quality of your work, especially under heavy scrutiny and a huge workload.
Part of the problem here was that Mr. Bradley was called to testify about a report he had never seen. If you are handed something and asked "What does page 8 say?", as was the case here, you are in no position to say whether it is right or wrong, only to state what it says. I am sure that if BOTH reports had been presented to Mr. Bradley, he would have stated that the results were contradictory and required further inquiry as to why the difference existed.
I think Patrick4n6 hit the nail on the head in his post: there should have been cross-validation of the results, and any discrepancies investigated. That would squarely fall on the shoulders of whoever is conducting the examination.
Angus Marshall's piece
Casey Anthony Inspired Process
After reviewing some of the video from the trial, I realized that a way of pulling text out of images would be a useful process for keyword searches.
This is an example of a script I am working on to extract text from images using the program tesseract. Once the text is exported from the images, it can be indexed, and the exported text can be used as a reference to help find keywords. The tool does have an error rate, so on its own it would not be a valid process, but used in conjunction with other tools it can help locate images containing relevant keywords. A rough sketch of the idea is shown below.
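To give a sense of the workflow, here is a minimal sketch that shells out to the tesseract command-line tool for each image and flags files whose OCR text contains a keyword. The folder names and keyword list are placeholders for this example, not the actual script used in the video.

    import subprocess
    from pathlib import Path

    IMAGE_DIR = Path("exported_images")   # assumed folder of exported/carved images
    TEXT_DIR = Path("ocr_text")
    KEYWORDS = ["chloroform"]             # example term; substitute your case keywords

    TEXT_DIR.mkdir(exist_ok=True)

    for image in IMAGE_DIR.iterdir():
        if image.suffix.lower() not in (".png", ".jpg", ".jpeg", ".tif", ".bmp"):
            continue
        out_base = TEXT_DIR / image.stem
        # tesseract writes its OCR output to <out_base>.txt
        subprocess.run(["tesseract", str(image), str(out_base)], check=False)
        txt_file = out_base.with_suffix(".txt")
        if not txt_file.exists():
            continue
        text = txt_file.read_text(errors="ignore").lower()
        hits = [kw for kw in KEYWORDS if kw in text]
        if hits:
            # OCR is error-prone, so treat hits as leads to verify manually.
            print(f"{image.name}: possible keyword hit(s): {hits}")

The extracted text files can then be fed into whatever indexing or keyword-search tool you already use, with each hit verified against the original image.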
In the video example, I use one of the images questioned in the Casey Anthony trial.
http//
"The major lesson being: validate your tools, compare your results, and resolve discrepancies before testifying."
Patrick4n6 said it best. Every tool will produce different results based upon how the tool was written and how it interprets the data. There are always false positives that must be sorted through. Having a good hex editor is always a plus for validation: you can look up the offsets reported by other tools and check the raw data yourself.
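As a rough illustration of that offset-checking habit (not any particular tool's workflow), here is a small sketch that dumps the raw bytes at an offset a tool has reported, in a hex-editor-style view. The image name and offset are placeholders.

    def dump_bytes(path, offset, length=64):
        """Print a hex + ASCII view of `length` bytes starting at `offset`."""
        with open(path, "rb") as f:
            f.seek(offset)
            data = f.read(length)
        for i in range(0, len(data), 16):
            chunk = data[i:i + 16]
            hex_part = " ".join(f"{b:02x}" for b in chunk)
            ascii_part = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
            print(f"{offset + i:08x}  {hex_part:<47}  {ascii_part}")

    # Hypothetical evidence image and offset reported by another tool.
    dump_bytes("evidence.dd", 0x1A2B00)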
I write forensic software and will admit that every tool will find different results based upon the methods its creator used. That doesn't mean the tools are flawed or defective; it just means the method must be validated by comparing the results against another tool.