The month of May saw a robust, diverse set of academic papers published. Tools and techniques are as present as ever, focused this month on Windows 10 and mobile malware, but so are discussions about how to ensure their reliability and credibility, both on their own and as part of broader quality assurance mechanisms. Notable are:
- The first study of biasability and reliability in digital forensics (DF) decision making.
- Legal assessments, in two papers, of proposed methods’ reliability.
- Recommendations for improving practitioner practices.
- A model prototype for a cost-benefit analysis for a digital forensics laboratory quality program.
A call for papers from the Forensic Science International family
The editors of the Forensic Science International (FSI) portfolio have issued a call for papers to be published in an ethics-oriented collection across its family of journals.
Noting that “reliable forensic research and practice has never been as relevant or as necessary” as it is now — given complications of the COVID-19 pandemic, including online misinformation, higher rates of lockdown-related domestic and gender-based violence, and a slowdown in general research work among others — the editors added:
“…such research must be grounded in ethical integrity, both in relation to scientific conduct and reporting. Ethics is essential in establishing quality and validity within the profession, as well as underpinning the FSI values of social justice and inclusion.”
For those engaged in digital forensics research and interested in publishing in Forensic Science International: Digital Investigation, Forensic Science International: Synergy, and/or Forensic Science International: Reports, some of the topics of interest include:
- collection, confidentiality and storage of identifying or private data
- data access, interrogation and forensic strategy development
- historical and contemporary social inequalities
- informed consent
- objectivity in reporting
- practitioner conduct and standards for forensic professionals
- tangential findings in forensic investigations
The goal: to “contribute to the establishment of a consistent and common ground across the FSI family for research and publication ethics… [and] may eventually lead to concrete policies adopted by each journal,” the editors wrote.
Digital forensics reliability, credibility, and quality standards
At FSI: Digital Investigation (FSI:DI), The Norwegian Police University College’s Nina Sunde and University College London’s Itiel Dror published the first study of biasability and reliability in digital forensics (DF) decision making, “A hierarchy of expert performance (HEP) applied to digital forensics: Reliability and biasability in digital forensics decision making.”
Building on prior work — including from other forensic sciences — examining the kinds of errors and uncertainties prevalent not just in digital evidence, but also human factors, Sunde and Dror noted: “… the quality of the outcome of the DF process – digital evidence – is dependent on cognitive and human factors, which can lead to bias and error.”
Sunde and Dror asked 53 digital forensics examiners to analyze the same evidence file. Varying the case context supplied with the evidence, the researchers found that context indeed biased examiner observations, and that their reliability, or consistency, was low.
“For improving DF work, as well as for transparency, it is important to study and assess the biasability and reliability of [practitioner] decision making,” they concluded, identifying an additional need for research on bias mitigation and effective quality measures in digital forensics in order to minimize or detect errors.
Forensic reliability and credibility was also the subject of “Digital forensic tool verification: An evaluation of options for establishing trustworthiness,” authored by n-gate ltd.’s Angus Marshall in FSI:DI.
Noting that “The absence of evidence of tool verification increases the work required for method validation to comply with regulatory and international standards requirements,” Marshall examined options for disclosing digital forensic tool verification data, then evaluated each option in terms of cost and of the risk reduction it offers, for both end users and vendors. While he recommended a preferred option, Marshall stressed the need for additional real-world evaluation of all options.
“Reliability validation for file system interpretation” was the topic of an FSI:DI paper by researchers from the Norwegian University of Science and Technology, Norwegian Police University College, the Netherlands’ University of Groningen, and Sweden’s Stockholm University.
Rune Nordvik, Radina Stoykova, Katrin Franke, Stefan Axelsson, and Fergus Toolan explored a very specific method validation: file system reverse engineering and interpretation. “Currently, there is no standard procedure for [practical] reliability testing” of this method, the authors wrote.
Dual tool verification is not a reliable tool testing method, they argued, and peer review alone isn’t sufficient quality assurance in a law enforcement context. Their proposal: “a formal reliability validation procedure for file system reverse engineering, documenting the forensic process, including the tools used, ensuring reliability and reproducibility of the method and the results.”
At WIREs Forensic Science, Nicolas Hughes of the Harris County (Texas) Office of Managed Assigned Counsel, together with Houston Forensic Science Center experts Erika Ziemak, Carisa Martinez, and Peter Stout, published “Toward a cost–benefit analysis of quality programs in digital forensic laboratories in the United States.”
Their focus: coming up with a model prototype for a cost-benefit analysis for a digital forensics laboratory quality program, including accreditation. Hypothesizing that “under realistic conditions, laboratory funders may realize savings by implementing an accredited quality program,” the authors used existing data for their model, which can help quantify quality metrics.
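At its crudest, the cost-benefit comparison such a model formalizes reduces to arithmetic like the following sketch. Every figure below is invented for illustration; the paper's model draws on real laboratory and error-cost data.

```python
# Back-of-the-envelope cost-benefit sketch for a lab quality program.
# All figures are invented for illustration only.
annual_program_cost = 250_000   # accreditation, audits, proficiency testing
errors_without_program = 4      # expected quality incidents per year, no program
errors_with_program = 1         # expected incidents with the program in place
cost_per_error = 120_000        # retesting, litigation, and remediation costs

expected_savings = (errors_without_program - errors_with_program) * cost_per_error
net_benefit = expected_savings - annual_program_cost
print(net_benefit)  # -> 110000: positive, so the program pays for itself here
```

Under these invented numbers the program yields a net benefit; the paper's contribution is supplying a defensible way to estimate the real inputs.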
Finally, FSI: Synergy published the third in an ongoing set of letters debating “Vacuous standards – Subversion of the OSAC standards-development process.” A reply in response to criticism of the original article, this paper — written by the original “Vacuous standards” authors joined by additional contributors — encouraged courts to inquire whether forensic science standards are fit for purpose, and not to accept claims of scientific validity until those inquiries have been satisfied. Reiterating their support for the Organization of Scientific Area Committees for Forensic Science (OSAC), the authors encouraged standards developers and publishers to monitor and revise processes for suitability for purpose.
Improving practitioner practices
Also at FSI:DI was “A new model for forensic data extraction from encrypted mobile devices,” the result of a collaboration between the Netherlands Forensic Institute’s Aya Fukami and Zeno Geradts together with the University of Groningen’s Radina Stoykova.
An especially interesting perspective in this paper is its legal assessment of current forensic data extraction techniques, used to support the proposal for the new model: replacing the traditional mobile forensics model, which relies on extracted data types (logical, physical, etc.), with a new model comprising user secret-based acquisition, reverse engineering-based acquisition, and vulnerability exploitation-based acquisition.
At FSI:DI, “Exploring digital evidence recognition among front-line law enforcement officers at fatal crash scenes” filled a literature gap by determining first responders’ awareness of the importance of seizing digital evidence associated with fatal vehicle collisions.
Authors Thomas Holt, of Michigan State University, and Diana Dolliver, of the University of Alabama (United States), found that respondents do recognize the importance of mobile devices and vehicle telematics, but that they are less certain of the potential digital evidentiary value of other items such as fast food or makeup. The responses helped the researchers address police capacity and training needs.
Teesside University (United Kingdom)’s Graeme Horsman offered an outline to guide digital forensics practitioners in the creation of “Contemporaneous notes for digital forensic examinations,” his paper at FSI:DI. The increasing complexity of digital forensics work, he argued, makes these kinds of notes a necessity:
“Not only do contemporaneous notes support the practitioner as they conduct their case work through to the production of any accompanying report,” Horsman’s abstract reads, “but in some instances disclosure of this information may be required in order for all parties to evaluate any evidential findings fully.”
Windows 10 forensic research
Publishing also at WIREs Forensic Science, Horsman discussed “Standardizing digital forensic examination procedures: A look at Windows 10 in cases involving images depicting child sexual abuse.” Horsman offered an example standard using “models for defining operational practice which are offence-specific, device-specific and operating system-specific.” Using Windows 10 as an example, he showed how these models establish minimum expected examination requirements.
At DFIR Review, Alexandros Vasilaras, Evangelos Dragonas, and Dimitrios Katsoulis, digital forensics examiners in Greece, explored “USB Forensics – Recover more Volume Serial Numbers (VSNs) with the Windows 10 Partition/Diagnostic Event Log.”
They wanted to find out how many Volume Serial Numbers (VSNs) of previously connected devices could be recovered from the new Windows Partition/Diagnostic event log. As part of their research, they developed a parsing tool that automates the extraction of VSNs and attributes LNK files and Jump Lists to a device by matching their VSNs to records in the event log.
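The attribution step can be sketched in a few lines. Everything below is illustrative: the record layout, device details, and VSN values are invented, and real tooling would first parse the Partition/Diagnostic EVTX records and the LNK and Jump List binary structures.

```python
# Hypothetical sketch: match Volume Serial Numbers (VSNs) recovered from the
# Windows Partition/Diagnostic event log against VSNs embedded in LNK files,
# attributing each link file to the USB device it referenced.

def format_vsn(vsn: int) -> str:
    """Render a 32-bit VSN in the familiar XXXX-XXXX hex form."""
    return f"{vsn >> 16:04X}-{vsn & 0xFFFF:04X}"

# VSN-to-device details recovered from event log records (invented values)
event_log_devices = {
    0x1A2B3C4D: {"model": "SanDisk Cruzer", "serial": "4C530000011208"},
    0x0055AA77: {"model": "Kingston DataTraveler", "serial": "001CC0EC34028430"},
}

# VSNs parsed out of LNK files / Jump List entries (invented values)
lnk_files = [
    {"path": "report.docx.lnk", "vsn": 0x1A2B3C4D},
    {"path": "photo.jpg.lnk", "vsn": 0xDEADBEEF},  # no matching device
]

def attribute_lnk_files(lnk_files, devices):
    """Pair each LNK entry with the event-log device sharing its VSN."""
    matches = []
    for lnk in lnk_files:
        device = devices.get(lnk["vsn"])  # None when no device record matches
        matches.append((lnk["path"], format_vsn(lnk["vsn"]), device))
    return matches

for path, vsn, device in attribute_lnk_files(lnk_files, event_log_devices):
    print(path, vsn, device["model"] if device else "unattributed")
```

The matching itself is a simple dictionary lookup; the hard forensic work is the parsing that populates the two inputs.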
Mobile device research
“The phone reveals your motion: Digital traces of walking, driving and other movements on iPhones,” authored by the Netherlands Forensic Institute’s Jan Peter van Zandwijk and Abdul Boztas, described how new timestamped digital traces of iPhone movement are found not just in the iPhone Health app, but also in WhatsApp logfiles and the file cache_encryptedC.db.
Teased at the Digital Forensics Research Workshop (DFRWS) Europe 2021 in April, the paper describes how these traces can imply “information about actions performed by its user in the physical world.” The researchers’ focus: detecting periods of walking, running, and driving, as well as the moment of impact in drop tests.
The authors cautioned that more research on these traces is needed before conclusions can be properly drawn from them, but they show promise in terms of estimating traffic collisions and times of death, as well as “making probability statements about hypotheses in the form of a likelihood ratio in a Bayesian evaluative methodology.”
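The likelihood-ratio framing the authors mention can be illustrated with a toy calculation; all probabilities below are invented for illustration.

```python
# Toy likelihood-ratio calculation (all probabilities invented):
# H1: the phone was being carried by a walking user at time T
# H2: the phone was stationary at time T
p_trace_given_h1 = 0.80  # assumed probability of observing the motion trace if H1
p_trace_given_h2 = 0.05  # assumed probability of observing it if H2

lr = p_trace_given_h1 / p_trace_given_h2
print(lr)  # -> 16.0: the trace is 16 times more likely under H1 than under H2
```

A likelihood ratio above 1 supports H1 over H2; the evaluative methodology the paper points to governs how such ratios are estimated and reported.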
“Microsoft’s Your Phone environment from a digital forensic perspective,” authored at FSI:DI by the Polytechnic Institute of Leiria (Portugal)’s Patricio Domingues, Luis Miguel Andrade, and Miguel Frade, analyzed Your Phone Companion for Android and Your Phone for Windows 10.
Focusing on post-mortem artifacts within the dual environment’s SQLite3 databases, the authors wanted to show that “Your Phone data left on a Windows 10 device can be useful to access a copy of messages, photos, and document interactions, especially when the Android device is inaccessible or even physically unavailable.” They also updated their YPA Autopsy module.
Mobile malware was the subject of “BLADE: Robust malware detection against obfuscation in android,” authored by Vikas Sihag, Manu Vardhan, and Pradeep Singh of the National Institute of Technology in Raipur, India and published at FSI:DI.
Basing their new obfuscation-resilient malware detection system, BLADE, on Opcode Segments, the authors:
- Developed an Opcode Segment Document to characterize features resilient to obfuscation techniques.
- Performed semantics based simplification of Dalvik opcodes to enhance BLADE’s resilience.
- Evaluated BLADE’s effectiveness against various obfuscation techniques.
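The simplification idea can be illustrated with a toy mapping: each Dalvik opcode is collapsed into a coarse semantic category, so behavior-preserving obfuscations leave the simplified sequence largely intact. The category assignments below are invented for the sketch, not BLADE’s actual mapping.

```python
# Toy semantics-based simplification of Dalvik opcodes (illustrative mapping).
SEMANTIC_MAP = {
    "invoke-virtual": "INVOKE", "invoke-direct": "INVOKE", "invoke-static": "INVOKE",
    "move": "MOVE", "move-result": "MOVE", "move-object": "MOVE",
    "if-eq": "BRANCH", "if-ne": "BRANCH", "goto": "BRANCH",
    "const": "CONST", "const-string": "CONST",
    "return-void": "RETURN", "return-object": "RETURN",
}

def simplify(opcodes):
    """Map a raw Dalvik opcode sequence onto coarse semantic categories."""
    return [SEMANTIC_MAP.get(op, "OTHER") for op in opcodes]

method = ["const-string", "invoke-static", "move-result", "if-eq", "return-void"]
print(simplify(method))  # -> ['CONST', 'INVOKE', 'MOVE', 'BRANCH', 'RETURN']
```

Sequences of these categories, rather than raw opcodes, then feed the detector, which is what buys resilience to renaming- and substitution-style obfuscation.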
A subset of mobile malware, mobile ransomware, was the focus of “RansomDroid: Forensic analysis and detection of Android Ransomware using unsupervised machine learning technique” published at FSI:DI.
Researchers Shweta Sharma and C. Rama Krishna of the National Institute of Technical Teachers Training and Research, together with Rakesh Kumar of the Central University of Haryana (India), highlighted limitations in supervised machine learning used to detect Android ransomware.
In particular, they wrote, the techniques rely on labeling from antivirus vendors and risk both misclassifying and failing to detect samples in real time. Their solution: unsupervised machine learning techniques via their proposed RansomDroid framework, which they said detects Android ransomware with an accuracy of 98.08%.
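The unsupervised idea can be sketched as a two-cluster split over unlabeled app feature vectors, flagging the cluster whose members look ransomware-like. Note the hedge: the paper’s RansomDroid framework uses a Gaussian mixture model, while the hand-rolled two-means routine and invented feature values below are only a stand-in.

```python
# Toy unsupervised detection: cluster unlabeled apps, flag the suspicious cluster.
# A simple 2-means stand-in for the Gaussian mixture model the paper uses.

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans2(points, iters=10):
    """Split points into two clusters; returns (centroids, labels)."""
    # Deterministic init: start from the two points farthest apart
    c0, c1 = max(((a, b) for a in points for b in points),
                 key=lambda pair: dist2(*pair))
    centroids = [c0, c1]
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min((0, 1), key=lambda k: dist2(p, centroids[k]))
                  for p in points]
        for k in (0, 1):
            members = [p for p, lab in zip(points, labels) if lab == k]
            if members:
                centroids[k] = tuple(sum(c) / len(members) for c in zip(*members))
    return centroids, labels

# (encryption-API call rate, threatening-UI string rate) per app -- invented
apps = [(0.1, 0.0), (0.2, 0.1), (0.0, 0.2),   # benign-looking
        (0.9, 0.8), (1.0, 0.9), (0.8, 1.0)]   # ransomware-looking
centroids, labels = kmeans2(apps)
suspicious = max((0, 1), key=lambda k: sum(centroids[k]))  # higher feature mass
flags = [lab == suspicious for lab in labels]
print(flags)  # -> [False, False, False, True, True, True]
```

Because no labels are needed, this kind of approach sidesteps the dependence on antivirus-vendor labeling that the authors criticize, though deciding which cluster is malicious still requires domain knowledge.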
A roundup of memory forensics research
In “Highlighting Research from the Next Generation of Memory Forensics Practitioners,” the team behind the Volatility Foundation blog offered an update on their collaboration with Dr. Golden G. Richard III at the Louisiana State University (LSU) Center for Computation and Technology (CCT).
Noting, “These research efforts have been strongly focused on ‘gaps’ in current analysis techniques; significant improvements of existing techniques; and efforts to classify and test the accuracy and reliability of existing tools,” the team’s post described research on the reliability of DFIR tools, automation of time-consuming manual processes, and memory forensics research gaps.
Be sure to read the entire Volatility Foundation blog post for links and additional details, as well as a description of the LSU’s entry into the National Science Foundation’s Scholarship for Service (SFS) program and how students may be able to benefit!