Researcher Nina Sunde on Reducing Bias in Digital Forensic Analysis

Christa: No professional in any discipline is free from bias, but how it affects decision making, especially decisions around a criminal suspect's liberty, is the topic of several years' worth of research by Nina Sunde. Nina is a PhD fellow at the University of Oslo's Department of Criminology and Sociology of Law and a lecturer at the Norwegian Police University College. Nina joins us this week on the Forensic Focus Podcast to talk in depth about her work. I'm your podcast host Christa Miller. Welcome Nina.

Nina: Thank you.

Christa: All right, so we’ll start with: what was it that first sparked your interest in cognitive bias and its implications for digital forensics?

Nina: Well, I would say I got interested in cognitive bias many years ago through learning about the role that forensic evidence played in wrongful convictions. Partly from the Innocence Project, probably, but also from cases from my own country where guilt biases seem to have caused one-sided investigations and erroneous verdicts.


So this sparked my interest in the mechanisms and sources that cause bias, and also in reading studies about biased decision making in forensic disciplines such as fingerprinting and DNA. I was surprised to learn how contextual information influenced results even at source-level assessments, such as comparing a latent fingerprint to a fingerprint from a suspect.

And this actually made me want to find out more about how this could impact digital forensic work. I couldn't find any research actually targeting bias in digital forensic work, so I realized there was a research gap here, and decided to start exploring it in my PhD project.

Christa: OK. So from that point, how has your interest evolved and been informed by current events either in your work or in the world or both?

Nina: Yeah… my own experience was that there seemed to be little awareness within law enforcement about the impact of context on decision making. I had never heard anyone talk about or warn about bias in digital forensic work.

So instead I heard people talk about objective and reliable digital evidence, and facts speaking for themselves. But from my own experience with investigative and forensic work, I knew that there were many bias sources in play: through close collaboration and through discussions around the case and the assignment, and also many situational factors such as time pressure and clearance pressure, and even exposure to emotionally disturbing content such as child abuse material, which can amplify bias.

So knowing, from my own experience, the magnitude of decisions that need to be made during the digital forensic process, and taking into account all these biasing sources, I considered the risk of error to be quite high and wanted to know more about this in the context of digital forensic work.

Christa: OK, that’s really interesting. So I mean, from there we’ve seen a pathway in your work forming across peer review and hypothesis development and bias mitigation among other topics. What is this body of work leading toward, and what do you hope to see implemented on a practical level?

Nina: Oh, there's a lot to say about that one, Christa! If I were listening to the impatient part of myself, I would jump straight to the practical level, so to speak. However, I think successful implementation at the practical level depends heavily on some changes in how we think about the digital forensic process and the evidence itself.

From my point of view, there are at least three important issues that need to be handled to ensure success at the practical level: the first is how we think about interpretation, the second is how we think about bias blind spots, and the third is how we think about error.

So if I can elaborate a bit on those issues… For the first issue, concerning interpretation, I'm hoping to see a digital forensic process that highlights the practitioner as the most important analytical instrument.

I would argue that the role of the digital forensic practitioner has been pushed into the background, often reduced to someone who finds and presents the facts. The scientific method and research-based practices are important, but it's important not to adopt a very conservative, positivistic view of the practitioner, a view that has long been abandoned by the scientific community.

So, objectivity still seems to be understood as freedom from interpretation, and I would say that's a dangerous fallacy. I think it's actually vital to acknowledge that interpretation happens, and that it is not an ugly thing. I strongly believe that the expertise of the digital forensic practitioner is necessary to obtain evidence and to interpret what it means for the case under investigation.

And I think that by refraining from conveying your interpretation, there's a risk on the one hand that the decision makers don't understand the relevance of the finding, and on the other hand (due to their lack of expertise) that they misinterpret the information in terms of what the finding means and its value for the question that matters.

So, I think interpretation is a very important issue here. And in terms of the bias blind spot: I often ask my students about their attitudes towards bias and whether they believe that bias might influence their own decision making, and still a high proportion actually reply that they don't think it's a problem.

So we still need to lecture digital forensic practitioners about biasing mechanisms and their implications for digital forensic decision making. However, I think the problem is not solved by lecturing alone. I think it's really important to go practical here, because having a little knowledge about bias can make you overconfident.

It can strengthen your belief that you can mitigate bias by pure willpower, for instance. So you need to go practical here, and probably run practical investigative case-solving exercises to experience the bias yourself. Then I think you really know (or believe and understand) that we need bias mitigation measures.

And concerning the third issue, I hope to see a shift in the attitude around error. I think error is somewhat unavoidable in any process involving human decision making, and instead of focusing on who to blame, I hope that we can view error as something that is useful to us.

I think that on the few occasions where an error is revealed, it's an opportunity for the organization to learn. We can of course minimize the consequences by detecting the error, but we can also learn from it and correct systematic errors.

So instead of looking for someone to blame, I think we should shift towards seeing error as a utility for the organization. Easier said than done, probably! But I think this way of looking at error could be more beneficial, also to digital forensics.

These points are, I would say, some prerequisites for successful implementation of bias mitigation measures. So back to your original question: at the practical level, I hope to see subjects involving hypothesis-driven investigation and also human and cognitive factors on the curricula for digital forensic education and training.

And I also hope to see peer review and verification as measures that are not only performed in a lab environment. And regardless of where they are performed, I hope to see them designed in a way that prevents bias for the peer reviewer as well. That's what I would want to aim for.

Christa: So to that end, your research contributed the first documented peer review methodology in digital forensics. Is it being used that you know of?

Nina: There has been a pilot in the Norwegian police adapting the fourth level, which is what we refer to as a conceptual peer review involving reports. And I hope to run the pilot in more police districts next year, and hopefully also to involve law enforcement organizations from other countries. So we're at the planning stage right now.

Christa: Very good. I imagine that you’ll be writing it up for additional published work.

Nina: Yeah, definitely. It's going to be evaluated from a research point of view and documented, and we'll be learning from the pilots as well. So I guess there will be a journal article about it once we have conducted the research.

Christa: OK, I look forward to reading it. Your research, I noticed, also consists of a lot of firsts, not just that methodology but others as well. Is this concerning? Should things like peer review, interpretation, and error have been considered on a practical level before, or does this research reflect the maturity of digital forensic science?

Nina: Well, everything becomes clear in hindsight, right? So it's easy to say that we should have known before what we know now, but I think research in the digital forensic domain has been very much affected by the rapidly changing technology environment. Challenges such as the increased volume and complexity of data, and novel technologies and user patterns, have absorbed most of the attention of academics in the field. And I think maturation has been slowed down by the challenges the discipline has dealt with over the last two or three decades at the efficiency level. So I think it's maturing towards adulthood, but slowly. And we're on the way.

Christa: OK. What are some metrics that might help to indicate forward motion and effectiveness in implementation of these new methodologies? Larger samples to study or other metrics?

Nina: Yes, we need larger samples. We cannot study digital forensic practice without involving the practitioners themselves. My sample was 65 who agreed, and 53 who completed the experiment. And I learned that recruiting participants was really demanding in my research.

So…but it’s unfortunately not unique to my research, this black box study by Barbara Guttman and colleagues included, I think, 102 participants completed one of the two examination tasks. So yeah, I think both mine and then this study include quite a small sample concerning the relative number of digital forensic examiners around the world.

But the baseline here is that we need more practice-oriented research, and the samples we need are defined by the objectives of the research. We sometimes need large samples to achieve statistical power in quantitative or experimental studies, but we can also use smaller samples in qualitative studies that look into practices from perspectives other than generating numbers and statistics: for example, case studies, ethnographic studies and so on. So it's not only about recruiting many participants; I think it's about doing more practice-oriented research.

Christa: Could your insights (I'm gonna switch gears a little bit) about practitioner processes be useful, again on a practical level, in qualifying expert witnesses? For example, hypothesis formation and testing, or other evidence reliability safeguards?

Nina: Our research indicates that there is a gap between what the best practice guidelines recommend and what is actually performed. It also indicates that insufficient implementation of recommended practices, or partial implementation, so to speak, is not effective.

So, for example, in the experiment a large proportion claimed to have used a hypothesis-driven approach to maintain examiner objectivity during the analysis. However, the vast majority applied the approach as a mental activity, as opposed to a structured and documented approach. And since the results indicated bias, a plausible explanation is that merely thinking about hypotheses is not an effective bias mitigation strategy.
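
[Editor's note: as an illustration of the difference Nina describes, here is a minimal sketch in Python of what a structured, documented hypothesis log might look like. It is not Sunde's published method; the record fields and the example hypotheses are hypothetical.]

```python
# A minimal sketch (not Sunde's published method) of a structured,
# documented hypothesis log: each finding must be assessed against
# BOTH competing hypotheses before it is recorded.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class Assessment:
    finding: str   # e.g. "download initiated at 14:02"
    fits_h1: str   # how the finding fits the prosecution hypothesis
    fits_h2: str   # how it fits the defense / alternative hypothesis
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


@dataclass
class HypothesisLog:
    h1: str  # prosecution hypothesis
    h2: str  # defense / alternative hypothesis
    assessments: list = field(default_factory=list)

    def record(self, finding: str, fits_h1: str, fits_h2: str) -> None:
        # Refusing an empty alternative-side note makes purely "mental"
        # hypothesis testing impossible to pass off as documented work.
        if not fits_h2.strip():
            raise ValueError("Assess the finding against the alternative hypothesis too.")
        self.assessments.append(Assessment(finding, fits_h1, fits_h2))


log = HypothesisLog(
    h1="The suspect downloaded the file deliberately.",
    h2="The file arrived without the user's knowledge (e.g. malware).",
)
log.record(
    finding="Download initiated from an interactive browser session",
    fits_h1="Consistent: a user-driven download path",
    fits_h2="Less consistent, but a scripted session cannot be excluded",
)
```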

So, in terms of qualifying expert witnesses, I think we also need to conduct proficiency testing. Formal education and experience are, of course, an essential part of qualifying expert witnesses, but proficiency testing is key, I think, to knowing something about the actual skills of the practitioner in these subjects.

Christa: OK. I wanna go back to something that you said earlier about context. I know in 2019 you wrote about the need for context in digital forensics, but then in 2021, your research showed that context introduces bias. Can there be balance? Can bias mitigation help balance the two?

Nina: Well, thank you for this question, Christa. This is a really important issue, so I'm glad I can elaborate a bit. I think the findings concerning contextual bias are important, but it's also essential not to draw too hasty conclusions about which measures to apply to mitigate contextual bias in digital forensics.

We should keep in mind that although there is a substantial research base within the forensic science domain showing that context may lead forensic examiners astray, this is the first study focusing on digital forensic work. So that's the first thing.

And I would also argue that digital forensics differs quite a lot from the majority of disciplines in the forensic science domain. And we don't know if bias mitigation measures that are effective for, let's say, a fingerprint examiner are effective in the context of digital forensic work.

So just adopting measures without doing research on whether they work is a bit risky, actually, I think. An intuitive measure would be to remove the task-irrelevant context from the examiner, and thus remove the source of the problem.

However, stating what is task-irrelevant in a digital forensic examination is not necessarily straightforward, I would argue. We would probably need to consider it on a case-by-case basis. So stating something general about what is always irrelevant is not that easy, I think.

And we also need to consider the bias that might emerge when exploring the context-rich evidence file. If you have too little context to start off with, it's like walking into a library, I would say.

So, there are some suggested measures, for example the Linear Sequential Unmasking-Expanded by Dror and colleagues, which aims to ensure that the evidence is evaluated before the examiner is informed about the context. However, we need more research to ensure that it's feasible, that it works as intended in digital forensics, and that it doesn't introduce other biases, for instance.
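
[Editor's note: a toy sketch in Python of the staged-disclosure idea behind Linear Sequential Unmasking-Expanded. The real protocol orders information by criteria such as relevance and biasing power; the "bias_risk" score, field names and callback below are illustrative assumptions, not the published procedure.]

```python
# Toy sketch of staged disclosure: the examiner commits to a documented
# judgment on the raw evidence BEFORE any context is revealed, then
# re-evaluates as each context item is unmasked, least suggestive first.

def staged_examination(evidence, context_items, evaluate):
    revealed = []
    history = [("no context", evaluate(evidence, revealed))]
    for item in sorted(context_items, key=lambda c: c["bias_risk"]):
        revealed.append(item)
        history.append((item["label"], evaluate(evidence, revealed)))
    return history  # a record of how the opinion shifted, step by step


# Hypothetical usage: evaluate() returns the examiner's documented
# judgment given the evidence and the context seen so far.
notes = staged_examination(
    evidence="recovered chat fragments",
    context_items=[
        {"label": "suspect owns the device", "bias_risk": 0.9},
        {"label": "file system timestamps", "bias_risk": 0.2},
    ],
    evaluate=lambda ev, ctx: f"judgment on {ev} given {len(ctx)} context item(s)",
)
```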

So far, I would actually emphasize the structured hypothesis-driven approach, since it may not only mitigate bias but also facilitate compliance with the normative obligation to investigate in a balanced manner towards both the prosecution hypothesis and the defense hypothesis.

And of course also being transparent about the evidence that relates to the hypotheses, and about the conditioning information that is taken into account, to allow for scrutiny and to prevent poorly founded, misleading results from going under the radar. That's what we really don't want.

Christa: Right. I actually have a question in a little bit about Linear Sequential Unmasking and other methods, but we'll come back to it. In your 2021 paper on digital forensics reporting, you highlighted legal decision makers' lack of technical competence. How can they come better prepared to evaluate reports and processes, as you were just describing, as well as search warrant processing?

Nina: Yeah. In the complex world we live in, we need experts with specialized knowledge, and we will probably need even more of them. The legal decision makers are experts in their own domain, law, obviously, and cannot be experts in all the expert domains they deal with in court, I think.

So the main responsibility still lies with the digital forensic expert: to justify that the process was performed in a forensically sound manner and that the results can be trusted, and to convey this in an understandable manner.

It is of course challenging to be both clear and non-technical when conveying what is sometimes a very technical issue. But I think we cannot demand that legal personnel have the same level of expertise as our own experts, obviously.

That said, a minimum level of knowledge should be expected of those who make legal decisions involving forensic evidence on a regular basis, I think. You don't need to be an expert to ask relevant questions concerning procedures, or to challenge the interpretations or evaluations of the results.

And again, transparency is key here. I think accurate documentation concerning methods, tools and procedures is really important to enable them to ask these questions, as is being transparent about the error mitigation procedures that have been applied.

But my research unfortunately shows that such information very often is vaguely documented and often absent, actually.

Christa: So going back to what you were talking about earlier, bringing in the technical aspect of explaining all of this information: that same paper described the Bayesian model of probability inference. Other authors have talked about the application of likelihood ratios. You mentioned Linear Sequential Unmasking and other mathematical constructs.

So kind of on that same note as the legal experts who are maybe not technical, is this realistic? Are those mathematical constructs realistic to introduce to juries?

Nina: Well, my paper explored what types of conclusions were used by the digital forensic examiners. That was due to the discourse in the forensic science community promoting the Bayesian model of probability inference, such as likelihood ratios.
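
[Editor's note: the likelihood ratio Nina refers to is standardly defined in the forensic statistics literature as the probability of the evidence E under the prosecution hypothesis H_p divided by its probability under the defense hypothesis H_d; in the Bayesian model it scales the prior odds into posterior odds:]

```latex
\mathrm{LR} \;=\; \frac{P(E \mid H_p)}{P(E \mid H_d)},
\qquad
\underbrace{\frac{P(H_p \mid E)}{P(H_d \mid E)}}_{\text{posterior odds}}
\;=\; \mathrm{LR} \times
\underbrace{\frac{P(H_p)}{P(H_d)}}_{\text{prior odds}}
% LR > 1 supports H_p, LR < 1 supports H_d, LR = 1 is neutral.
```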

And I was interested to see whether this methodology was applied in digital forensic reports. So that’s why I investigated that in my experiment also. I couldn’t find any trace of such conclusion types in my sample.

So, to your question: is it realistic for juries? To be honest, I’m a little bit on the fence about this issue, especially when it comes to jurors. I think the question here is whether we are comfortable with a juror not understanding the concept and basing their decisions on the question, “do I trust this expert?” versus whether we require that the juror’s decision is founded on understanding what the presented result is.

And yes, its associated limitations and uncertainty, and what it means for the question that matters. So, personally, I think there are more jurors in the first category than the latter if they are presented with a numerical value concerning digital evidence. And what complicates this issue even more is that jurors are asked not only to assess the individual pieces of evidence; they are asked to evaluate all the evidence in context.

And that means, in a sense, trying [to add up apples, pears and oranges such as] witness statements, documentary evidence, and then these numerical values concerning forensic evidence. So considering the rule of law principle (the norm that supports the equality of all citizens before the law), I believe it's important that those who decide whether or not to declare someone guilty of a crime should base their decision on an actual understanding of the relevance, the credibility and the value of the evidence. That's my personal opinion.

Christa: Do you think it's possible that part of the problem – not with juries obviously, but with all of the bias and the issues that we've discussed here – is practitioners' own limited knowledge of these different mathematical constructs or other means of evaluation: digital evidence certainty descriptors, for instance, as you mentioned in one of your papers? How do we improve this knowledge? Do we make it part of industry and vendor training, for instance?

Nina: Well, my impression is that the knowledge about the Bayesian model of probability inference is still not very widespread in the digital forensic domain. I think in general, more attention should be devoted to reporting and presentation of results, including interpretation and evaluation of findings.

And I would argue that the attention in training is still focused more on finding stuff, and less on documenting and presenting it. So I think there's room for improvement there, to highlight the necessity of better reporting, actually.

And I actually think that before we start providing likelihood ratios on a regular basis, there must be a robust underpinning to ensure that the relevant results are obtained in the first place, that they are derived from sufficiently tested methods, tools and procedures, and that they are evaluated against a robust knowledge base.

So from my point of view, there are many issues to solve, and I'm not sure that likelihood ratios are either the right starting point or the most urgent matter for the digital forensic discipline right now. But maybe for the future it's relevant to include them in training, also from the vendors.

But I think we need to focus more on the reporting, and not just on automatically generated reports: on how to actually write them up yourself.

Christa: Sort of on a similar note: could tool vendors likewise support better verification towards improving transparency, making their products less of the black boxes that you mentioned a little bit ago in this conversation, as well as in two of your papers?

Nina: Well, yes, as you revealed, I guess, I lean towards establishing trust through transparency, as opposed to creating multiple impenetrable layers of trust. Errors relate to both technical and non-technical sources, and open source code is an important measure for verification and for increasing reliability, I think. And I think we should move towards open source.

But I also think the vendors could assist examiners by implementing effective logging functionalities, helping them to track their activities during examinations at the analysis stage, and functionalities for contemporaneous notes, for instance.
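
[Editor's note: a minimal sketch in Python of the kind of contemporaneous-notes logging Nina describes: an append-only, timestamped activity log. The hash chain is one possible tamper-evidence mechanism and is our assumption, not a feature she names.]

```python
# Append-only examination log: each entry commits to the previous one,
# so any later edit breaks the chain and becomes detectable.
import hashlib
import json
from datetime import datetime, timezone


class ExaminationLog:
    def __init__(self, case_id: str):
        self.case_id = case_id
        self.entries = []

    def note(self, action: str, detail: str = "") -> None:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {
            "case": self.case_id,
            "time": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "detail": detail,
            "prev": prev_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)


log = ExaminationLog("case-2021-042")  # hypothetical case identifier
log.note("keyword search", "terms: invoice, transfer; scope: user profile")
log.note("bookmark", "chat fragment at offset 0x4F20; assessed against H1 and H2")
```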

In my jurisdiction, the digital investigation is done by law enforcement on behalf of both the prosecution and the defense, and the acquired image file would not be disclosed to the defense. So having access to informative contemporaneous notes would be very useful for the defense attorney, to verify that the investigation has covered issues raised by the defendant at the pretrial stage.

Christa: So going all the way back to my earlier question about the practical application of all of these measures: what do you think sweeping changes would take? And I'm thinking of broad efforts to standardize the industry, like national accreditation requirements in the United Kingdom, or the centralized Hansken Project in the Netherlands…?

Nina: Yes. That's a big question, Christa! I think there is much to do; every time we open a door here, we'll find something to work with. There's much to do on standardization and harmonization, and on increasing the scientific rigor within this discipline, I think.

So there are many issues that need to be solved, both to be able to obtain digital evidence effectively and to ensure high quality investigations. I also believe that there are some cultural changes that need attention: for example, the attitude towards the role of subjectivity and bias, and how we think about error, and so on.

So I don’t believe we can prevent all error. I think that we need error mitigation and peer review and verification on a case by case basis. And we cannot prevent ourselves out of these issues. Accreditation may be a part of the solution, but not the solution.

Christa: Right. Well, and I think some of the feedback coming out of the United Kingdom at this point, after two or three years of the process, reflects that.

Nina: Yeah, definitely. And I also think that we need to define minimum requirements for being a digital forensic examiner. Right now it varies across the world who is an examiner and who is a novice just examining digital content. So we need to make a clear distinction about who the expert is.

Because right now, due to all the well-designed software and products such as Hansken, many people with very limited technological competence are able to review digital content at the application level these days. And that does not mean that they are able to understand the limitations of the tools they are using, or how to draw correct inferences from the traces they encounter.

So this is where the digital forensic examiner's skills are really needed: to understand the data structures, to do advanced hypothesis-driven examination, and to understand the limitations and uncertainties related to the procedures and the results. So I think defining the roles is also an important measure, to know who is actually the expert.

So, concerning sweeping changes, well, I would say there are some key issues that we have already touched upon. I think we need more practice-oriented research, and we need to put bias and human error on the digital forensic curricula. And I think we need to apply error mitigation measures that cover not only technical error but also human error.

And I think we should start looking at error as a friend! Detecting errors enables us to minimize consequences, and it's also an opportunity for learning and improvement. So shifting the view could be beneficial, I think.

Christa: Yeah, you'd have to get past, or encourage people to get past, that fear of mistakes and any potential consequences (I mean, nobody likes to admit we made a mistake), so that would be really challenging, I imagine.

Nina: Yeah. And I think it's an organizational issue; it's not a personal issue for each and every digital forensic examiner. It's about how the work around error is organized, and it shouldn't be up to the individual digital forensic examiner. It should be a part of what we do, a natural part of the process: you haven't finalized the digital forensic process until you have done the verification and checked for errors, and then you can deliver your product, I think.

Christa: Yeah. Well, Nina, thank you again for joining us on the Forensic Focus Podcast. It’s been a really good discussion.

Nina: Thank you for having me.

Christa: Thanks also to our listeners. You’ll be able to find this recording and transcription along with more articles, information and forums at www.forensicfocuspodcast.com. Stay safe and well.

