Research Roundup: Communicating Uncertainty In Digital Forensics Results

Recently, we reported on two papers that described ways to reduce the risk of mistaken interpretations of digital evidence. Evaluating the uncertainty of evidence, the authors wrote, could bring more structure — and trust — to digital forensics.

We continue this theme with a set of papers that further explore reducing that risk. As the conversation continues around importing some principles and practices from other forensic sciences, we’ve included summaries of a few papers published in that area, as well.

Your chance to help with research

“Giving back” was a strong theme at this year’s digital forensics conferences (though the concept goes back at least to 2018). If you’ve been looking for a way to contribute, consider responding to a research survey!

Elénore Ryser, a PhD candidate at the School of Criminal Science in Lausanne and lead author of last month’s “Structured decision making in investigations involving digital and multimedia evidence,” is at work on a new project.

The project, “The use of digital traces in court,” intends “to build an adequate format and methodology to improve the communication between digital forensic science practitioners and law practitioners.” To that end, through November 30, Ryser is seeking survey responses from both legal and digital forensics professionals to find out about:


  • The familiarity of legal specialists with the digital trace (attorneys: access this survey here)
  • The use of the digital trace during court or other legal process (forensic examiners: access this survey here)
  • The methods and means digital forensics specialists use to communicate their results to the judiciary

Pointing to the introduction of the U.S. Forensic Science Research and Standards Act of 2020, Ryser observed in a message: “The awareness around this subject has considerably risen since and in the last year, papers have been published about ways to evaluate digital traces and procedures to communicate the results in front of a court.”

However, she added, little research appears to exist on how this is actually being done, much less on ways to improve the process. “This is really important, as a loss of confidence of the legal community in digital traces is worrisome,” she said. “The lack of forensic theoretical bases, the complexity and rapid changes in computer systems are among other reasons why courts might be reluctant to consider digital evidence.

“However, it is called upon to appear more and more frequently in a court of law; smartphones are already ubiquitous in our lives and connected objects will bring even more opportunities to record human activity. It is therefore important that such a resource is neither completely banned from the courts, nor used without a safeguard. To do this, communication between forensics and lawyers is essential.”

After closing the survey at the end of November, Ryser plans to publish a summary on her website by spring 2021 and to publish the research formally later next year.

Digital forensics trace evidence studies

For examples of digital trace evidence in practice, the journal Science & Justice and DFIR Review have published three works:

At Science & Justice, Teesside University’s Graeme Horsman studied 16 anonymous / temporary file transfer services for “A case study on anonymised sharing platforms and digital trace evidence left by their usage.” Because the files may not remain on local storage media, the digital traces each service leaves behind can help to identify when it’s been in use. Horsman also pointed out that “identifying the use of a service may also expose networks of illegal file distribution.”
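Horsman’s paper catalogues the traces themselves; purely as an illustration of the kind of triage such findings enable (not his methodology), the sketch below queries an exported copy of a Chrome “History” SQLite database for visits to a hypothetical list of sharing-service domains. The domain names and database path are placeholders, not services from the study.

```python
import sqlite3

# Hypothetical sharing-service domains to look for; the services actually
# examined are listed in Horsman's paper.
SERVICE_DOMAINS = ["example-transfer.com", "example-drop.io"]

# Work on an exported copy of the Chrome "History" database, never the live file.
conn = sqlite3.connect("History")
cur = conn.cursor()

for domain in SERVICE_DOMAINS:
    # Chrome stores last_visit_time as microseconds since 1601-01-01 (WebKit epoch);
    # subtracting 11644473600 seconds converts it to the Unix epoch.
    cur.execute(
        "SELECT url, datetime(last_visit_time / 1000000 - 11644473600, 'unixepoch') "
        "FROM urls WHERE url LIKE ?",
        (f"%{domain}%",),
    )
    for url, visited in cur.fetchall():
        print(f"{visited}  {url}")

conn.close()
```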

At DFIR Review, Ryan Benson’s research on TikTok timestamps sought to answer the question: When was a video posted on TikTok? By comparing TikTok and Twitter URLs, Benson was able to figure out how to decode timestamps from post IDs — “a timestamp that was easier to retrieve, hard for TikTok to remove, and can be found even for deleted or private videos.”
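Benson’s write-up works the timestamp out of the 64-bit post ID itself. As a minimal sketch of that approach (assuming, per his findings, that the upper 32 bits of the ID hold a Unix timestamp in seconds; the example ID below is made up):

```python
from datetime import datetime, timezone

def tiktok_id_to_datetime(post_id: int) -> datetime:
    """Recover the embedded UTC timestamp from a 64-bit TikTok post ID."""
    unix_seconds = post_id >> 32  # upper 32 bits hold seconds since the Unix epoch
    return datetime.fromtimestamp(unix_seconds, tz=timezone.utc)

# Hypothetical post ID, for illustration only:
print(tiktok_id_to_datetime(6829279339100998917))  # -> a date in mid-2020
```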

Also at DFIR Review, Larry Jones wanted to determine whether link file and jump list “saved, copied, and moved” artifacts had changed in Windows 10. Jones had noticed “a disparity in the number of LNK files when compared to the Jump List DestList entries” relative to those on Windows 7 systems. “In addition to the traditional user file and folder access, Windows 10 has expanded, in limited circumstances, the documenting of user file and folder activity,” Jones concluded. He outlines these artifacts and their unique behaviors extensively in his article.

Improving forensic science at large

In September, a number of papers were published seeking to improve forensic science as a whole — which could have knock-on effects for digital forensics.

“Known or potential error rates” are a key part of admissibility hearings for scientific evidence in U.S. courtrooms, but as authors Itiel Dror and Nicholas Scurich describe in their paper, “The (mis)use of scientific measurements in forensic science,” they aren’t well understood.

On that point, they wrote: “Forensic science techniques have repeatedly passed the Daubert standard for admissibility, even when they have no properly established error rates and… experts have implausibly claimed that the error rate is zero.”

Their focus in this paper was inconclusive decisions, which they said could be either correct or incorrect. While they don’t explicitly call out digital forensics, the Scientific Working Group on Digital Evidence (SWGDE) has documented issues with error rates, which we covered as well in a recent podcast.

Proficiency testing could be part of the oversight described in the SWGDE document. That was the subject of “Implementing blind proficiency testing in forensic laboratories,” where authors Robin Mejia, Maria Cuellar, and Jeff Salyards stated: “Regular proficiency testing of forensic examiners is required at accredited laboratories and widely accepted as an important component of a functioning quality assurance program.”

Blind proficiency tests, they wrote, have a distinct advantage over the declared proficiency tests in use in most labs: “They must resemble actual cases, can test the entire laboratory pipeline, avoid changes in behavior from an examiner knowing they are being tested, and are one of the only methods that can detect misconduct.”

Acknowledging that “both logistical and cultural obstacles to the implementation of blind proficiency tests” exist, the authors described eight key challenges and suggested solutions. Their conclusion: “Implementing blind proficiency testing has the potential to enable better accountability for forensic laboratories, and it could lead to reducing errors in forensic science.”

Another part of reducing errors is better research methodology. To that end, Jason Chin, Rory McFadden, and Gary Edmond argued for the use of registered reports in forensic science, which they said “… flip the peer review process, with reviewers evaluating proposed methods, rather than the data and findings.”

This structure, they continued, reduces the risk of exaggeration or manipulation of results, along with the risk of publication bias — in other words, “ensuring studies with null or otherwise unfavorable results are published.”

While it’s up to forensic journals to implement registered reports, digital forensics students submitting theses or dissertations might encounter them — or ultimately even join the effort to make them more accepted.

Going deeper on evidence evaluation and standards

In Science & Justice, authors Simon Cole and Matt Barno, both of the Department of Criminology, Law & Society at the University of California-Irvine, conducted a baseline study of probabilistic reporting in criminal cases in the United States.

Their research focused on evidence from friction ridge prints, firearms, toolmarks, questioned documents, and shoeprints — not digital evidence. However, the problem they observed — that forensic reports in general use categorical statements of certainty, not metrics of uncertainty — may ring familiar to digital forensics experts.

Of course, many judges are willing to accept inexact terminology — especially if it means less complicated testimony. At the same time, many academics struggle to determine how to apply statistics like likelihood ratios to digital evidence. And forensic reporting itself lacks a standard. Meanwhile, experts still have to go to court and testify. It’s little wonder that categorical verbal scales (and the potential for miscommunication) remain.

Some research has tried to address this problem directly. Earlier this year, in his paper “Digital Evidence Certainty Descriptors (DECDs)” Graeme Horsman offered the framework of a verbal scale to help communicate uncertainty in digital evidence. Quantification, he wrote, “may not actually be possible due to the intricacies of digital data and the difficulties involved with the fine-grained interpretation of events.”

But this approach, wrote Alex Biedermann and Kyriakos Kotsoglou for FSI: Synergy, overrelies on the subjective application of “ordinary words” to express uncertainty. “… ordinary words are notions on which the fact-finders will decide based upon their own experience of ordinary life,” they wrote in “Digital evidence exceptionalism? A review and discussion of conceptual hurdles in digital evidence transformation.”

They continued: “Experts should therefore refrain from opining on their meaning, for they are almost by definition non-experts on the meaning of ordinary words.” Instead, they argued for “justified probability assignments” based on an individual practitioner’s personal body of knowledge and experience. That could include approaches from other sciences, mathematics, statistics, and decision science overall.

The point, they wrote, isn’t to teach probability theory in the courtroom, but instead, to clarify “the way in which forensic scientists should make up their mind when dealing with uncertainty and assigning a value to their findings…. that is focusing on the probability of the findings given a pair of competing propositions.”
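In concrete terms, this is the likelihood-ratio formulation used elsewhere in forensic science (a standard rendering, not a formula reproduced from the paper): the examiner assigns the probability of the findings under each of two competing propositions and reports the ratio of the two.

```latex
% Likelihood ratio for findings E under competing propositions H_1 and H_2
LR = \frac{\Pr(E \mid H_1)}{\Pr(E \mid H_2)}
```

A ratio above one lends support to the first proposition, below one to the second; the strength of that support is what verbal and numerical reporting scales attempt to convey.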

Foundational to Biedermann and Kotsoglou’s argument, however, was the assumption that standards exist, providing a baseline that uncertainty could be measured against: “It is deeply undesirable and deleterious especially for the coherence of a legal system to tolerate a practice wherein different expert witnesses assess and articulate uncertainty in radically different ways.”

That’s the problem described in “Vacuous standards – Subversion of the OSAC standards-development process.” There, authors Geoffrey Stewart Morrison, Cedric Neumann, and Patrick Henry Geoghegan focused on American National Standards Institute / American Academy of Forensic Sciences Standards Board (ANSI/ASB) standards for bloodstain pattern analysis to argue that some of the initial standards put forward through the National Institute of Standards & Technology’s Organization of Scientific Area Committees (OSAC) could actually impede improvements to the scientific validity of forensic practice.

Echoing Cole and Barno’s observations about reporting standards, this paper details the authors’ concerns: vague language, a low bar for compliance, compliance that would still be insufficient to lead to “scientifically valid results,” and few substantive requirements.

All were cited as vacuous standards that “appear to be designed to allow laboratories and practitioners to continue with existing poor practice, and if challenged to be able to respond that they are following established standards…. There is a danger… that a court may not look further than the fact that a standard exists, and be misled into believing that conformity to a vacuous standard is indicative of scientific validity, even though it is not.”

For example, ANSI/ASB 030 only requires forensic science providers to have a series of written procedures. Echoing Biedermann and Kotsoglou, Morrison et al. wrote: “It leaves the content of those written procedures almost entirely to the discretion of each individual forensic science provider.”

These are real dangers for digital forensics practitioners and expert witnesses, as well. In the U.S., for instance, many continue to debate the value of standardization according to ISO 17025 and/or others over concerns that the process isn’t fit for digital forensic methods.

At the same time, however, wrote Biedermann and Kotsoglou: “…as a currently developing new branch of forensic science, [digital forensics] has a unique opportunity not to commit the failures and shortcomings in evidence interpretation that (continue to) affect traditional forensic disciplines.” Watch this space for additional research on digital forensic evidence evaluations.

Christa Miller is a Content Manager at Forensic Focus. She specializes in writing about technology and criminal justice, with particular interest in issues related to digital evidence and cyber law.
