Digital Forensic Evidence And Artifacts: Recent News And Research

This month’s academic research reflects two aspects of a changing digital forensics industry: new ways of thinking not just about digital artifacts, but also about broader investigative processes, including interagency cooperation.

This round-up article includes three open-access articles from the August issue and the in-progress December issue of the journal Forensic Science International: Digital Investigation (FSI:DI):

  • Structured decision making in investigations involving digital and multimedia evidence (Ryser, Spichiger, Casey)
  • The role of evaluations in reaching decisions using automated systems supporting forensic analysis (Bollé, Casey, Jacquet)
  • Digital forensics as a service: Stepping up the game (van Beek, van den Bos, Boztas, van Eijk, Schramp, Ugen)

Also included are two pieces of research focusing on artifacts:

  • The paper “A Two-Stage Model for Social Network Investigations in Digital Forensics” (David, Morris, Appleby-Thomas), available from the Journal of Digital Forensics, Security and Law.
  • DFIR Review published “Parsing Google’s Now Playing History on Pixel Devices,” in which Kevin Pagano asks what information is recoverable from the use of the Now Playing feature on Google Pixel phones.

Bringing structure to forensic evidence evaluations

One of the most significant themes to emerge from recent academic research is the need to improve transparency in, and ultimately trust in, digital forensic evidence. Beyond proposals to standardize some aspects of the field, others aim to take a more structured, scientific approach to digital forensic evidence.

Pointing out that the results of forensic examination support many different kinds of decision-making during an investigation — operational, legal, and so forth — “Structured decision making in investigations involving digital and multimedia evidence,” authored by Elénore Ryser, Hannes Spichiger, and Eoghan Casey, proposes “a logically structured framework for scientific interpretation.”


Applied at all stages of the investigative process, this kind of framework could reduce the risk of mistakes stemming from “information overload, inaccuracy, error and bias.” Such mistakes are a standing risk because decisions made throughout an investigation rest on the limited information available at any given point.

However, the authors wrote, forensic examiners can manage the uncertainty these limits introduce. Using a hypothetical case of source camera identification modeled on real-world investigations, their framework offers a way to evaluate competing explanations for the presence or absence of information.

By assigning explicit values to uncertain data, examiners can improve their own decision-making, as well as the quality of the information they hand to the investigators who make decisions. That lessens the risk of overestimating the reliability of digital and multimedia evidence, and makes the evidence, well, more “forensic.”
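To make the idea of assigning values to uncertain findings concrete, consider a likelihood-ratio calculation of the kind long used in forensic science. The sketch below is purely illustrative, with invented propositions and probabilities; it is not taken from the paper:

```python
# Minimal, hypothetical sketch of likelihood-ratio evaluation of two
# competing explanations for an observation. Values are invented.

def likelihood_ratio(p_e_given_h1: float, p_e_given_h2: float) -> float:
    """LR > 1 supports H1 over H2; LR < 1 supports H2 over H1."""
    if p_e_given_h2 == 0:
        raise ValueError("P(E|H2) must be non-zero to form a ratio")
    return p_e_given_h1 / p_e_given_h2

# H1: the questioned image was taken with the seized camera.
# H2: the questioned image was taken with some other camera.
# The examiner estimates how probable the observed sensor-noise match
# score would be under each proposition (illustrative values only).
lr = likelihood_ratio(p_e_given_h1=0.80, p_e_given_h2=0.05)
print(f"LR = {lr:.1f}")  # LR = 16.0: the evidence supports H1 over H2
```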

That’s even more important as automation begins to be implemented, as authors Timothy Bollé, Eoghan Casey, and Maëlig Jacquet describe in “The role of evaluations in reaching decisions using automated systems supporting forensic analysis.”

Their paper extends the concept of structured evaluation from human analysis to automated systems (including, but not limited to, those relying on machine learning approaches; for example, the algorithms that classify child exploitation material, identify faces, or detect links between related crimes).

Besides reducing the risk of undetected errors or bias, evaluating these systems’ outputs could improve their performance, understandability, and the forensic soundness of decisions made using them. To that end, the authors provided a set of recommendations for automated forensic system design:

  • System performance should be evaluable based on whether the system is fit for purpose for a given forensic question. For example, the authors wrote, a facial recognition system might show whether an image or frame contains an object or a person, but cannot necessarily identify a specific object or person. 
  • Automated systems that support forensic analysis should be designed with understandability and transparency “baked in.”
  • One way to do this is for the system to “guide users through the forensic evaluation and decision making steps to be sure that they can understand and explain the result in a clear, complete, correct and consistent manner.”
  • The context for information should be retained throughout examination.
  • It should be possible to formulate explicit hypotheses, with any automated steps clearly described (a sketch of what this might look like in code follows this list).
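As a hypothetical illustration of the last three recommendations, an automated component might attach explicit hypotheses, provenance, and context to every result it emits. The class and field names below are invented for this sketch; they are not drawn from the paper or from any particular system:

```python
# Hypothetical sketch: an automated step records explicit hypotheses,
# tool provenance, and source context alongside each result, so findings
# stay explainable and evaluable downstream. All names are invented.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Finding:
    artifact: str           # e.g., a file path or record identifier
    observation: str        # what the automated step detected
    hypotheses: list[str]   # explicit, competing explanations
    score: float            # system confidence, not a conclusion
    tool: str               # which automated component produced this
    tool_version: str       # exact version, for reproducibility
    source_context: dict = field(default_factory=dict)  # where it was found
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

finding = Finding(
    artifact="IMG_0042.jpg",
    observation="face detected in frame",
    hypotheses=[
        "the frame depicts the person of interest",
        "the frame depicts a different person with similar features",
    ],
    score=0.91,
    tool="face-detector",
    tool_version="2.3.1",
    source_context={"evidence_item": "phone-01", "extraction": "logical"},
)
print(finding)
```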

These recommendations provide a foundation for an automated system whose results are easier to evaluate in more structured ways.

Digital forensics as a service (DFaaS) goes international

For FSI:DI’s December issue, in “Digital forensics as a service: Stepping up the game,” coauthors H.M.A. van Beek, J. van den Bos, A. Boztas, E.J. van Eijk, R. Schramp, and M. Ugen described how the Netherlands Forensic Institute (NFI) implemented “digital forensics as a service” (DFaaS) via the Hansken platform.

The paper, the last in a series of three about DFaaS, highlights how DFaaS supports digital forensic knowledge sharing, the contextualization of digital traces, and standardization.

Implemented since 2010, first under the name XIRAF, Hansken was designed to minimize case lead time, maximize coverage of seized digital data, and efficiently mobilize specialists, all in a centralized environment built on security, privacy, and transparency principles.

As of 2019, Hansken’s users include law enforcement agencies from outside the Netherlands. Having withstood a 2016 judicial review, the platform has now been used in over 1,000 cases, some 100 of which are being investigated concurrently. Among these are cases with data from more than 1,000 devices and over 100 terabytes of raw material.

Lessons learned “from an organizational, operational and development perspective in a forensic and legal context” include:

  • The DFaaS business case is hard to make: the transition from a traditional way of working to a centralized model carries hidden operational costs, which makes those costs difficult to measure and compare.
  • All users’ needs must be taken into account so that they can experience the kinds of benefits that will enable them to embrace the changes to their process, especially when that involves relinquishing control over some parts of the process.
  • Working in an agile way in a broad governmental context is difficult owing to bureaucracy. Detailed documentation is needed to record benefits and mitigate risks, and decision making can be slow: the opposite of agile.
  • Continuous development — where new features become available once every few days or weeks, and changes are made to underlying third-party technologies — can frustrate forensic investigations.
  • A monolithic platform will never support all case-specific needs because the pace of technological change itself is too rapid to keep up with. The authors found that open source or commercial tools and case-specific scripts were still needed to process the digital evidence.
  • A DFaaS platform does not replace a digital forensics expert. Because Hansken “brings [tactical] digital evidence in reach of laymen,” the authors stressed the continued need for critical review of the way forensic artifacts are interpreted and labeled.
  • DFaaS must serve all stakeholders in a criminal case, including the defense, while taking into account the need to limit access to material such as contraband or seized digital currency.

New approaches to social media forensic analysis and an old(-ish) Google Pixel artifact

At the Journal of Digital Forensics, Security and Law, Cranfield University’s Anne David, Sarah Morris, and Gareth Appleby-Thomas propose “A Two-Stage Model for Social Network Investigations in Digital Forensics” to identify and contextualize features from social networking activity artifacts. 

Their model focuses on understanding a user’s browser activity and the types of artifacts it can generate, and how activity and artifacts are linked.

First, URLs are identified and recovered from disk. Features are then extracted from them, such as the social network visited or the actions performed by the user (search, follow), which “can be used to infer user activity or allude to the user’s intent.”
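As a minimal illustration of this first stage, a parser might map recovered URLs to coarse activity features. This is a hypothetical sketch, not the authors’ implementation; real URL path patterns vary by site and change over time:

```python
# Hypothetical stage-one sketch: derive coarse activity features from a
# recovered social network URL. The patterns below are illustrative only.

from urllib.parse import urlparse

def extract_url_features(url: str) -> dict:
    parsed = urlparse(url)
    features = {"site": parsed.netloc, "action": "unknown"}
    path = parsed.path.lower()
    if "/search" in path or parsed.query.startswith("q="):
        features["action"] = "search"
    elif "/follow" in path:
        features["action"] = "follow"
    elif "/status/" in path or "/posts/" in path:
        features["action"] = "view_post"
    return features

print(extract_url_features("https://twitter.com/search?q=forensics"))
# {'site': 'twitter.com', 'action': 'search'}
```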

In the second stage, the recovered artifacts are corroborated, adding supplementary information that contextualizes the features extracted in the first stage.

The outcomes, the authors wrote, include:

  • Prioritizing social network artifacts for further analysis through URL feature extraction
  • Determining social connections or relationships in an investigative context
  • Discovering how recovered artifacts came to be, and how they can successfully be used as evidence in court

At DFIR Review, Kevin Pagano explores in “Parsing Google’s Now Playing History on Pixel Devices” what information is recoverable from the use of the little-discussed Now Playing feature on Google Pixel phones.

Introduced with the Pixel 2 and Pixel 2 XL in 2017 and included in every Pixel phone release since, Now Playing is a “baked in app/feature” that recognizes music playing nearby. Because this history is stored locally, Now Playing can offer valuable pattern of life data.
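For readers who want to experiment, the sketch below shows one way such a locally stored history could be read, assuming it sits in a SQLite database. The table and column names here are hypothetical placeholders; consult Pagano’s write-up for the actual storage location and schema on Pixel devices:

```python
# Hedged sketch: reading a hypothetical Now Playing history from SQLite.
# Table and column names are placeholders, not the real Pixel schema.

import sqlite3
from datetime import datetime, timezone

def read_now_playing_history(db_path: str) -> list[tuple[str, str, str]]:
    """Return (utc_timestamp, artist, title) rows, oldest first."""
    rows = []
    with sqlite3.connect(db_path) as conn:
        cur = conn.execute(
            "SELECT timestamp_ms, artist, title FROM recognition_history "
            "ORDER BY timestamp_ms"
        )
        for ts_ms, artist, title in cur:
            ts = datetime.fromtimestamp(ts_ms / 1000, tz=timezone.utc)
            rows.append((ts.isoformat(), artist, title))
    return rows

# Because each recognition is timestamped, the sequence of recognized songs
# can help establish when the device was in a given environment -- the
# pattern-of-life value noted above.
```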

More from FSI: Digital Investigation

For subscribers, the August issue of FSI:DI also includes:

  • The challenge of identifying historic ‘private browsing’ sessions on suspect devices (Horsman)
  • A survey on digital camera identification methods (Bernacki)
  • Detecting child sexual abuse material: A comprehensive survey (Lee, Ermakova, Ververis, & Fabian)
  • Forensic speaker recognition: A new method based on extracting accent and language information from short utterances (Saleem, Subhan, Naseer, Bais, Imtiaz)
  • Smart contracts applied to a functional architecture for storage and maintenance of digital chain of custody using blockchain (Petroni, Gonçalves, de Arruda Ignácio, Reis, Martins) (Note: for more about the “blockchain of custody,” the Council of Europe’s Project LOCARD published a description of how blockchain technology is useful for digital evidence processing.)
  • Digital forensic tools: Recent advances and enhancing the status quo (Wu, Breitinger, O’Shaughnessy)

The December issue is in progress, but so far also includes:

  • Towards a conceptual model for promoting digital forensics experiments (Oliveira Jr, Zorzo, Neu)
  • A study on the decryption methods of telegram X and BBM-Enterprise databases in mobile and PC (G. Kim, M. Park, Lee, Y. Park, J. Kim)
  • A blockchain based solution for the custody of digital files in forensic medicine (Lusetti, Salsi, Dallatana)

Finally, JDFSL also published a paper on cryptography, passwords, and the U.S. Constitution’s Fifth Amendment; we’ll cover it in our upcoming quarterly Forensic Focus Legal Update.

Christa Miller is a Content Manager at Forensic Focus. She specializes in writing about technology and criminal justice, with particular interest in issues related to digital evidence and cyber law.
