Human complexity lies at the heart of so much of digital forensics – both the artifacts that offer insights into people’s motives, means, and opportunity to commit wrongdoing, and the investigations that reveal them.
Our research roundup for the end of 2021 features a number of articles that highlight this:
Exploring bias and its mitigation
September’s roundup explored new dimensions to reducing bias in forensic science, featuring in particular the work of two anthropologists: the University of West Florida’s Allysha Powanda Winburn and Texas State University’s Chaunesey M.J. Clemmons, who argued for “mitigated objectivity” in forensic sciences.
Since then, Forensic Science International: Synergy published both a letter in response to their original paper and their follow-up response to that letter.
In his letter to the editor, the University of Vienna’s Hans Ditrich argued that existing means of bias reduction are sufficient and showing progress, and that ultimately, “the pursuit of objectivity in generating valid results remains at the core of reaching a (hopefully fair) verdict” and should “remain in the responsibilities of forensic practitioners.”
Winburn and Clemmons, in their response, countered that mitigated objectivity “checks our subjectivity but does not force us to reject our humanity—it allows us to be not only scientists but people who do science.”
Agreeing that “quality-control practices like peer review and linear sequential unmasking can serve to curb biases—both implicit and explicit,” Winburn and Clemmons argued in favor of a more holistic approach to forensic science, calling mitigated objectivity “a more realistic and ethical approach that constrains the theory-laden nature of our data and the inherent subjectivities that we all bring to our analyses… [and] allows us to be full, emotionally and cognitively complex human beings without fear that our compassion will compromise our conclusions.”
Humanness is central to several other papers published at the end of 2021. “The benefits of errors during training,” authored by Dr. Heidi Eldridge of RTI International, Jon Stimac of the Oregon State Police Forensic Services Division, and John Vanderkolk of the Indiana State Police Laboratory (all in the United States), offered a literature review on the benefits of learning from errors and how these principles could support forensic science today and into the future.
On the managerial side, “How to influence positive change? Managers’ involvement as emotional architects in the solution for relieving forensic examiners’ workplace stress” encouraged leaders to use the Emotion Regulation Skills-Abilities (ERSA) model to ensure a less stressful workplace environment. Author Donta S. Harper, a graduate of Argosy University, focused on understanding and developing the skills and abilities used during interpersonal communication, toward developing better managers.
Communication also factored into “Toward a common language for quality issues in forensic science,” where authors Anna L. Heavey, Gavin R. Turbett, Max M. Houck, and Simon W. Lewis echoed some of the points raised between Winburn, Clemmons, and Ditrich. Their argument: “the development of a common [interdisciplinary] language for quality issues in forensic science may provide the key to unlocking this crucial information to support collaboration, continuous improvement, and the fundamental understanding of ‘error’ in forensic science.”
On a more practical level, Teesside University’s Graeme Horsman focused on the kind of decision making that underpins good stakeholder communication. “‘Scaffolding’ responses to digital forensic inquiries” addressed the kinds of case-specific investigative inquiries that can challenge forensic practitioners to offer reliable responses “derived from justifiable and robust processes and information, coined here as the underpinning ‘scaffolding’” including “past experience,” “targeted testing” or through reference to appropriate “existing bodies of knowledge.”
Of course, sometimes data can be the source of biases that could jeopardize solid cases. This was raised in “Challenges and possible severe legal consequences of application users identification from CNG-Logs,” authored by Furkan Gözükara of Turkey’s Toros University.
There, Gözükara described the “incorrect usage of CGNAT records” – Carrier-Grade Network Address Translation (CGNAT) allows a single public IP address to be shared among multiple end users – to identify, rather than simply detect, criminal actors in mass terrorism investigations.
By investigating a set of cases in Turkey and comparing the CGNAT methods used there with the alternative methodology used in the EncroChat cases, the author argued that “while the precision of actor detection decreases [with CGNAT], the false positive rate increases.”
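The identification problem at the heart of that argument can be illustrated with a toy example. In the hypothetical CGNAT log below (all subscriber IDs, addresses, and timestamps are invented for illustration), querying by public IP address and timestamp alone returns every subscriber concurrently mapped to that address, while the source port is what narrows the result to a single user:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical CGNAT log records: many subscribers share one public IP,
# disambiguated only by source port and time window.
@dataclass
class CgnatRecord:
    subscriber_id: str
    public_ip: str
    public_port: int
    start: datetime
    end: datetime

LOG = [
    CgnatRecord("sub-001", "203.0.113.7", 40001,
                datetime(2021, 5, 1, 10, 0), datetime(2021, 5, 1, 10, 30)),
    CgnatRecord("sub-002", "203.0.113.7", 40002,
                datetime(2021, 5, 1, 10, 5), datetime(2021, 5, 1, 10, 20)),
    CgnatRecord("sub-003", "203.0.113.7", 40003,
                datetime(2021, 5, 1, 10, 10), datetime(2021, 5, 1, 10, 40)),
]

def candidates(public_ip, when, port=None):
    """Subscribers that could have generated traffic from public_ip at `when`.
    Without the source port, every concurrent subscriber is a candidate."""
    return [r.subscriber_id for r in LOG
            if r.public_ip == public_ip
            and r.start <= when <= r.end
            and (port is None or r.public_port == port)]

when = datetime(2021, 5, 1, 10, 15)
print(candidates("203.0.113.7", when))         # ['sub-001', 'sub-002', 'sub-003']
print(candidates("203.0.113.7", when, 40002))  # ['sub-002']
```

When port information is missing from the records being matched – as in the misuse Gözükara describes – every concurrent subscriber becomes a suspect, which is exactly how false positives multiply.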
Triage: Decision-making and custom image acquisition
Horsman also published “Triaging digital device content at-scene: Formalising the decision-making process,” a professional practice report focused on “determining when it is actually appropriate to triage the contents of a device at-scene” – a complex decision with “multiple technical and procedural issues.” Horsman proposed a nine-stage triage decision model, which could promote consistent and transparent practice in the face of mounting data volumes.
At DFIR Review, Larry Jones described “Validation of X-Ways Forensics Evidence File Containers,” affirming that these containers are “a verifiable option for the purpose of creating custom data images from a digital device.”
Focusing on the need to narrow data volumes via triage or targeted acquisition, as well as the need to “limit the over-collection and over-sharing of irrelevant data,” Jones described how individual files included in XWF Evidence File Containers were verified and validated as exact matches of the same files obtained from the original source, and how their hash value calculations could be validated. Subject to some limitations, converting a raw container to the E01 format is also a verifiable option for creating custom data images from a digital device.
Phishing and ransomware detection
Human factors again arise in “Don’t Bite the Bait: Phishing Attack for Internet Banking (E-Banking),” where İlker Kara of Turkey’s Çankırı Karatekin University explored the detection and analysis of an e-banking phishing attack.
Observing that human behavior continues to confound phishing prevention measures, Kara argued that “real phishing attack studies are essential to study and analyze the attackers’ attack techniques and strategies” – and ultimately to trace the attackers themselves.
Researchers in Nigeria additionally studied “Performance Assessment of some Phishing predictive models based on Minimal Feature corpus.” Orunsolu Abdul Abiodun, Kareem S.O., and Oladimeji G.B. of Abeokuta’s Moshood Abiola Polytechnic, together with Sodiya A.S. of the Federal University of Agriculture, described a new approach to phishing detection via machine learning.
Comparing results from a variety of classifiers – Random Tree, Decision Tree, Artificial Neural Network, Support Vector Machine and Naïve Bayes – trained using a URL-oriented minimal feature set, the researchers found that the Random Tree classifier outperformed the others and encouraged future research to evaluate other machine learning techniques for phishing detection.
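The “minimal feature” idea can be sketched in a few lines: derive a handful of cheap signals from the URL string itself, then hand them to whichever classifier is being evaluated. The specific features below are illustrative assumptions commonly used in phishing research, not the authors’ exact corpus:

```python
import re
from urllib.parse import urlparse

def url_features(url: str) -> dict:
    """Extract a minimal, URL-oriented feature set for phishing classification.
    These particular features are illustrative, not the paper's exact corpus."""
    parsed = urlparse(url)
    host = parsed.netloc
    return {
        "length": len(url),                                  # long URLs are a common phishing signal
        "has_at": "@" in url,                                # '@' can disguise the real destination
        "num_dots": host.count("."),                         # many subdomains can mimic brands
        "host_is_ip": bool(re.fullmatch(r"[\d.]+", host)),   # raw IP instead of a domain name
        "uses_https": parsed.scheme == "https",
    }

# A suspicious-looking URL scores on several signals at once
print(url_features("http://192.168.10.5/secure-login@bank.example"))
```

A feature vector like this would then be fed to each of the classifiers under comparison (Random Tree, Decision Tree, ANN, SVM, Naïve Bayes) to assess their relative performance.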
Machine learning also factored into research from Japan. “RanSAP: An open dataset of ransomware storage access patterns for training machine learning models,” authored by Manabu Hirano, Ryo Hodota, and Ryotaro Kobayashi, offers a publicly available dataset of dynamic ransomware features designed to address the limitations of controlled research environments.
Their paper analyzes and evaluates the dataset in detail, including its limitations and a comparison with other dynamic analysis methods, and presents “a hypervisor-based monitoring system of storage access patterns followed by a design and an implementation of a feature extractor and machine learning models for ransomware detection.”
Memory forensics, cloud forensics, and video-based reconstruction
A team of researchers from the Singapore University of Technology and Design, Eurecom, and Cisco Systems examined “The evidence beyond the wall: Memory forensics in SGX environments.” Flavio Toffalini, Andrea Oliveri, Mariano Graziano, Jianying Zhou, and Davide Balzarotti explored the enclaves – “unobservable portions of memory… that physically screens software components from system tampering” – within the hardware-based Software Guard eXtensions (SGX).
Abdellah Akilal of Algeria’s Université de Bejaia, along with University College Dublin’s M-Tahar Kechadi, described “An improved forensic-by-design framework for cloud computing with systems engineering standard compliance.” Adopting a six-phase research methodology, the authors proposed a systems- and software-engineering-driven framework to implement an improved “forensic-by-design” paradigm for cloud computing systems.
Finally, researchers from Slovakia’s University of Žilina published “Simulation-based reconstruction of traffic incidents from moving vehicle mono-camera.” Eduard Kolla, Veronika Adamová, and Peter Verta discussed a new method of collision or near-miss reconstruction: combining kinetic simulation, projective geometry, 3D point cloud, and video processing. The method offers a “physics-based,” accurate 3D reconstruction of general vehicle motion, enabling investigators to extract in-depth technical information about an incident from video footage.