I think the correct answer is to say that you did not find evidence to support the contention that X opened folder A. That doesn't mean it didn't happen, but you don't have evidence of it. Absence of evidence is not necessarily evidence of absence.
This is right; as I said, it is only one option. You never state that something didn't happen, you just say that you can't find evidence to prove it did. Also, if you have doubts, as a forensic investigator, you can always refuse the task. It's better to give back the task than to have a bad conscience because of an unsure answer.
The so-called "gray zone" of unsure answers is a debate for lawyers.
Quick how many fingers are these? 😯
Choose one
1) I don't know, I cannot see your hand, NOT ENOUGH INFORMATION.
2) I presume one or more, but no more than five; however, since you used the plural "fingers" I would expect two or more, and still no more than five.
3) My guess is three.
jaclaz
pǝɥɔʇǝɹʇs sɹǝƃuıɟ 9 ƃuıploɥ ɯɐ puɐ lıʇɔɐpʎlod ɐ ɯɐ ı ʍʇq puɐ ʇɔǝɹɹoɔ sı ⇂#
ɹǝʍsuɐ
So, I'm looking at developing a scale depicting how much confidence a practitioner has in their findings, to support jury decision making. Such scales are commonly used in other forensic disciplines like fibre and footwear marks.
While the scale may be useful (fixing terminology is always useful; the ISO OSI model has done a lot of good just by defining the terms to use), the metrics must also be in place, and the results must be possible to replicate. If one FA says 'conclusive' and another says 'persuasive', there's something wrong.
Add to that the behaviour noted in incompetent FAs, such as the examples provided in the chapter on arson (Case Study Cameron Todd Willingham) in 'Forensic Science Reform', or that of fingerprint evidence (Case Study Brandon Mayfield), in which one or more forensic analysts ended up stating 'conclusive' evidence based on lack of knowledge in one case and faulty processing in the other.
1. Conclusive Fact- The current set of data on a device, following testing and validation cannot be interpreted any other way than that which is presented.
Considering that a majority of Intel computer systems have had AMT (remote out-of-band management of personal computers) for many years, potentially allowing something very like backdoor access through a separate path, there is always at least one alternative interpretation: that someone else did it. The technical possibility is there; it's a question of 'how can we tell if AMT was used or not? Can we exclude that it was used?'
2. Compelling- Digital data is as a result of a known and validated process initiated by known actions. (Example, internet history found in a browsers typical log file)
I don't understand what you're saying. The first sentence reads as if there are some parts missing.
3. Persuasive- Information deviates from standard formats but can be logically tested, verified and explained. (Example, a carved Internet History record)
And again, not certainly. On a system, running virtual machines, some of which have been deleted, you can't necessarily say if a carved history record comes from the main system or from one of the virtual systems. This very probably affects interpretation.
4. Feasible- Digital data is capable of explaining a suggested hypothesis but 1 or more core requisites are missing in order for a scenario to be fully validated with available digital data.
Disagree. A core requisite must be present; if it isn't there, it's not a hit. Minor factors could possibly be absent without affecting the core interpretation, but if a core requisite is absent and it leads to 'feasible' (a weak *positive* result), it may become grounds for false convictions. (And those core requisites must be on *very* strong scientific grounds. Shaken Baby Syndrome had three core requisites … but they were not based on good science. True, that's a little beside the point in this case, as SBS did not go so far as to declare 'feasible SBS' if one of the triad elements was not present – but if it had, the damage in false convictions would have been appalling.)
Drop this one entirely, I think.
5. Implausible- Digital data is unlikely to be the result of the proposed hypothesis. Core requisites are missing in order to rely upon the understanding offered. (For example, the suspect says A happened, but for A to have happened the digital data needs to show B and C. Neither is present.)
From here on, I see no real value in the scale. Why grade negative results?
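For illustration only, here is how the five proposed levels might be sketched in code. The level names come from the posts above; the `assess` function and its inputs (counts of core requisites, format, exclusion of alternatives) are my own reading of the definitions, not part of the proposal, and the thread itself disputes whether some levels should exist at all:

```python
from enum import Enum

class Confidence(Enum):
    """The five proposed levels, names as posted in the thread."""
    CONCLUSIVE_FACT = 5
    COMPELLING = 4
    PERSUASIVE = 3
    FEASIBLE = 2       # disputed above: a missing core requisite may not merit a positive grade
    IMPLAUSIBLE = 1

def assess(core_present: int, core_total: int,
           standard_format: bool, alternatives_excluded: bool) -> Confidence:
    """Hypothetical mapping from observations to a level (an assumption, not a standard)."""
    if core_present == 0:
        return Confidence.IMPLAUSIBLE      # no core requisites at all
    if core_present < core_total:
        return Confidence.FEASIBLE         # some, but not all, core requisites present
    if alternatives_excluded:
        return Confidence.CONCLUSIVE_FACT  # no other interpretation survives testing
    # All core requisites present, but alternatives not excluded:
    # standard log data reads as Compelling, carved/non-standard data as Persuasive.
    return Confidence.COMPELLING if standard_format else Confidence.PERSUASIVE

print(assess(3, 3, standard_format=False, alternatives_excluded=False).name)  # PERSUASIVE
```

Writing it down this way makes the replication problem concrete: two examiners only reach the same level if they agree on the inputs, i.e. on what counts as a core requisite and on when alternatives are truly excluded.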
Missing entirely is a grade based on absent forensic research. Imagine that there is no research at all, yet someone has created a process and claims that, given inputs X, Y, Z, performing processes p1, p2, p3 will produce The Answer. A bit like microscopic hair comparison as a foundation for identifying people.
Those who bought into the idea, won't question it. Hopefully, someone will say 'junk science' — but then, they did that over SBS, and very few listened.
A Junk Science forensic analyst will provide appropriate confidence levels to his results … but as he can't be trusted in the first place, what use are they?
Hopefully this is an unmentioned prerequisite for the use of the scale of confidence.
1. Conclusive Fact- The current set of data on a device, following testing and validation cannot be interpreted any other way than that which is presented.
The fact of your finding may not be 'conclusive', nor might it support qualifying the word 'fact' with an adjective; your qualification is based only on 'current' understanding and not necessarily on things to come or (new) discoveries to be made.
Interesting topic and quite a relevant discussion. The question I have that no one has directly raised is "why is this approach to developing a confidence level for DF based on other disciplines?"
I find this to be common to every proposed theoretical pursuit in DF. Beginning such research on the assumption that the way other disciplines define, measure, and communicate confidence can be useful may not even be appropriate. Is the idea of confidence levels even meaningful in the context of information technology, much less digital forensics? Also, is the work only sound if there are stats and metrics to go with it?
It may seem benign, but if any foundation is to be assumed plausible, shouldn't we be able to show that it is, first, before basing/testing any kind of models or practices on it?
Just thinking out loud…
Interesting topic and quite a relevant discussion. The question I have that no one has directly raised is "why is this approach to developing a confidence level for DF based on other disciplines?"
The reason for this, even though it is highly flawed, is this continued insistence that Digital Investigations must act like a forensic science discipline otherwise the world would implode.
I have heard several (non-digital) forensic practitioners express amazement that we do no statistical analysis or other 'basic' scientific theory. The reason, which they cannot grasp, is that we do not deal in statistics in that way.
Some really interesting points. I think if we don't have some form of scale then we end up saying something is 'true' or 'uncertain'. Surely there are lots more shades in between which, if we could define them in some way, would be helpful to those relying on our interpretation of digital data?
I think comparing to other FS disciplines is somewhere to start, but that's it; we need to make our own framework.
A scale of credibility/reliability/veracity doesn't really work in a courtroom, although it does apply to military/intelligence assessments. Courts, at least in the USA, rely upon a scale, but it is not numerical nor do I believe it ever could be. It is subjective, based on reasonableness, not absolutes. Even a criminal conviction is not an absolute of 100% guilt proven, but 'beyond a reasonable doubt'.
With a DF exam, insofar as finding evidence goes, the scale is simple: 0% (evidence cannot be found) or 100% (yes, evidence was recovered).
As far as an expert opinion goes, it is still an opinion based on the evidence recovered and training/education/experience, which is still not an absolute but can affect a judgment, which in turn can affect whether the judge/jury believes the case merits 'beyond a reasonable doubt'.
I agree: in terms of finding evidence, the scale is 0 or 100%. But in terms of interpretation, I don't think the same applies. There are definitely degrees of confidence.
What about this scenario.
A deleted URL for site X is found on the suspect's computer. Question: did the suspect visit X?
Can you answer that with yes or no? I would argue not. How confident are you that they did? What if it was found as the result of another site being cached, where the URL is in the page's code? Someone else might have visited it. Who had access to the device? What if it was a pop-up? What if it was planted there?
Obviously I'm playing a bit here, but do you get where I am coming from?
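The scenario can be made concrete with a toy sketch: given which corroborating artifacts were actually observed, list which explanations for the recovered URL remain consistent. The hypothesis and artifact names here are invented for illustration, and the expected-artifact sets are deliberately simplistic:

```python
# Hypothetical: each explanation for a recovered URL, paired with the
# extra artifacts we would expect to find if that explanation were true.
HYPOTHESES = {
    "user visited the site":        {"cache_entry", "typed_url_record"},
    "URL embedded in another page": {"referrer_page_in_cache"},
    "pop-up / drive-by":            {"cache_entry"},
    "planted by a third party":     set(),  # may leave no expected traces at all
}

def consistent_hypotheses(observed: set) -> list:
    """Return the explanations whose expected artifacts are all present.

    More than one usually survives, which is why a bare yes/no answer
    to 'did the suspect visit X?' overstates what the data supports.
    """
    return [h for h, expected in HYPOTHESES.items() if expected <= observed]

# e.g. only a cache entry was found alongside the URL:
print(consistent_hypotheses({"cache_entry"}))
# ['pop-up / drive-by', 'planted by a third party']
```

The point is not the particular sets, but that interpretation is an exclusion exercise: confidence grows only as competing explanations are ruled out by additional artifacts, which is exactly the "degrees of confidence" being argued for above.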
It will still always be an opinion, subjective, based on the expert's personal experience, education, and training. Another expert may have a completely different opinion because of differing experience/training/education.