Digital forensics research last month fell fairly neatly into two categories, each seeking to solve bigger problems in the field. The first category covers ensuring quality: through service-level frameworks, better support for first responders, and improved reporting and research practices.
The second category includes papers that examine various aspects of Internet of Things forensics, including a broad overview of current challenges and gaps, and specifics around drones, vehicles, and big data.
Ensuring quality through service levels and other frameworks
At Forensic Science International: Synergy, Ray A. Wickenheiser of the New York State Police Crime Laboratory System offered “Reimagining forensic science – The mission of the forensic laboratory.” Wickenheiser’s thesis addresses the problem of backlog — cases submitted to a lab that are either in progress or awaiting analysis — in the wake of the COVID-19 pandemic.
Writing about forensic services in general, Wickenheiser suggested: “By eliminating the awaiting analysis backlog, analysis could begin immediately upon submission. This would provide analysis in as short a time as technology permitted, optimizing the value of forensic laboratory service.”
In a similar vein, Graeme Horsman, a lecturer at Teesside University, discussed the benefits of “Defining ‘service levels’ for digital forensic science organisations” at Forensic Science International: Digital Investigation.
Focusing on law enforcement agencies, Horsman suggested “a system of ‘Service Levels’, designed to transparently outline the different types of forensic capability that can be deployed within a given investigative scenario.” Defining and describing seven such levels, Horsman also offered a “service level allocator” decision model so a lab’s clients could choose an appropriate level for each investigation.
Of course, before a device ever makes it to a lab, a first responder has to evaluate its evidentiary value and priority before seizing it. “This task is far from straightforward where case outcomes can be determined by the quality of decision making deployed by the first responder in relation to the value placed upon any identified devices,” Horsman cautioned in his FSI:DI paper, “Decision support for first responders and digital device prioritisation.”
His solution: a “Device Evaluation and Prioritisation Statement (DEPS) proforma” including a scoresheet and case examples to support and document first responders’ efforts to evaluate “digital investigative opportunities” they encounter in the field.
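Horsman’s actual proforma is a structured document, but the underlying idea, scoring each encountered device against weighted criteria to rank seizure priority, can be illustrated with a short sketch. The criteria and weights below are invented for illustration only and are not taken from the DEPS proforma:

```python
# Hypothetical weighted scoring of devices encountered at a scene.
# Criteria names and weights are illustrative, not from Horsman's DEPS.

CRITERIA_WEIGHTS = {
    "likely_holds_case_relevant_data": 3,
    "owner_linked_to_suspect": 2,
    "risk_of_remote_wipe": 2,
    "device_accessible_without_password": 1,
}

def priority_score(answers: dict) -> int:
    """Sum the weights of every criterion the first responder marked 'yes'."""
    return sum(w for name, w in CRITERIA_WEIGHTS.items() if answers.get(name))

devices = {
    "suspect_phone": {"likely_holds_case_relevant_data": True,
                      "owner_linked_to_suspect": True,
                      "risk_of_remote_wipe": True},
    "household_smart_tv": {"device_accessible_without_password": True},
}

# Rank devices from highest to lowest seizure priority
ranked = sorted(devices, key=lambda d: priority_score(devices[d]), reverse=True)
print(ranked)  # suspect_phone ranks first
```

The point of such a scoresheet is less the arithmetic than the documentation: the recorded answers justify, after the fact, why one device was prioritised over another.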
On the other end of a digital forensic investigation, Horsman also discussed “The different types of reports produced in digital forensic investigations.” Published as commentary at Science & Justice, his paper offered what he called “scaffolding” for “reporting at ‘technical’, ‘investigative’ and ‘evaluative’ levels.”
Drawing attention to the idea that “[p]oor reporting practices in [digital forensics] are likely to undermine the reliability of evidence provided across this field,” Horsman drew on reporting types in other forensic sciences to discuss each type’s scope, content and construction requirements. Each, he wrote, “maintains a specific purpose and interpretative-context, determined by the examination workflow undertaken by a practitioner following client instruction.”
Once a device is submitted to a lab, ensuring its scientific value starts with mitigating the effects of human bias — including a tendency to believe that digital data is unerring. To this end, Karen Renaud of the U.K.’s University of Strathclyde, Ivano Bongiovanni of Australia’s University of Queensland, Sara Wilford of the U.K.’s De Montfort University, and Alastair Irons of the U.K.’s Sunderland University collaborated at Science & Justice on “PRECEPT-4-Justice: A Bias-Neutralising Framework for Digital Forensics Investigations,” a mitigation framework.
Improving a different form of reporting — research experimentation — but with the same goal of ensuring scientific rigor was “Experimentation of digital multimedia forensics: State of the art and research gaps,” a paper published at WIREs Forensic Science. In it, researchers in Brazil mapped and analyzed how digital multimedia forensics experiments have been conducted “and whether data-based evidence has been provided.”
The State University of Maringá’s Edson Oliveira Jr., Avelino F. Zorzo of the Pontifícia Universidade Católica do Rio Grande do Sul, and the University of Santa Cruz do Sul’s Charles V. Neu looked at 49 experiments published in digital forensics databases and conferences or workshops. They identified gaps in “the way experiments are reported, especially how data are shared to allow reproducibility and, consequently, evolution of the research topic.”
Internet of Things forensics… and forensics on various Things
At FSI:DI, researchers from the U.K.’s Staffordshire University and Loughborough University discussed “The complexity of internet of things forensics: A state-of-the-art review.” Pantaleon Lutta, Mohamed Sedky, Mohamed Hassan, Uchitha Jayawickrama, and Benhur Bakhtiari Bastaki highlighted key challenges and research gaps in IoT forensics via literature review.
Defining IoT fundamentals and applications, along with key factors affecting IoT forensics, the researchers also reviewed available IoT forensics frameworks, models, and methodologies for their practicality, concluding that much of the work remains more theoretical than practical. To that end, they described open challenges and requirements for IoT forensics, including the need for more practical approaches.
At FSI:DI, researchers at India’s National Forensic Sciences University discussed their “Drone GPS data analysis for flight path reconstruction: A study on DJI, Parrot & Yuneec make drones.” Ravin Kumar, together with Animesh Kumar Agrawal, used open source tools to extract, analyze, and visually reconstruct three different drones’ flight logs and navigational data.
From there, they developed the “FlyLog Converter Tool” utility to process and convert Parrot UAVs’ flight logs from TXT/JSON to CSV formats so that their flight paths could be more readily visually represented.
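The conversion step itself is conceptually simple. A minimal Python sketch, assuming a hypothetical JSON flight log with per-sample time, latitude, longitude, and altitude fields (the real Parrot log schema differs, and this is not the FlyLog Converter Tool itself):

```python
import csv
import io
import json

def flight_log_to_csv(json_text: str) -> str:
    """Convert a (hypothetical) JSON flight log into CSV rows of
    timestamp, latitude, longitude, and altitude for path plotting."""
    log = json.loads(json_text)
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["timestamp", "latitude", "longitude", "altitude_m"])
    for sample in log.get("samples", []):
        writer.writerow([sample["time"], sample["lat"],
                         sample["lon"], sample["alt"]])
    return out.getvalue()

# Two fabricated samples, for demonstration only
example = json.dumps({
    "samples": [
        {"time": 0, "lat": 28.61, "lon": 77.20, "alt": 10.5},
        {"time": 1, "lat": 28.62, "lon": 77.21, "alt": 12.0},
    ]
})
print(flight_log_to_csv(example))
```

Once in CSV form, the coordinate columns can be loaded directly into common mapping or plotting tools to render the flight path.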
At Champlain College’s Leahy Center, William Alber previewed his “Car Security Project Introduction.”* As the vehicle manufacturing industry inches closer to fully autonomous cars, Alber wrote, cars are being designed with built-in security.
However, those designed within the past decade “can be the most vulnerable, as they weren’t designed with security in mind, and no new patches or updates are being delivered (or are even able to be delivered),” he added. Moreover, manufacturing standards don’t exist; ISO 21434, “the first set of rules and regulations related to cybersecurity for [auto] manufacturers,” is under development. Alber’s research intends to delve more deeply into these gaps.
What can be done with all this data? That’s the subject of “Handling Big Data in a large DFIR Case,”* research by Kaya Overholtzer and Ian Eubanks, also at the Leahy Center. Its goal: “to attempt to find a solution, or a set of solutions to address the issues of Big Data in forensics,” wrote Overholtzer.
Intended to fill a research gap, the project will explore the built-in tools of open-source, commercial, and server environments, working toward a practical approach to the identification, collection, analysis, and presentation of big data. The researchers want not only to use these tools to generate connections and timelines, but also to streamline the process to save time.
Security is also the main topic of “Windows Kernel Hijacking Is Not an Option: MemoryRanger Comes to the Rescue Again,” independent research by Moscow, Russia-based Igor Korkin in the Journal of Digital Forensics, Security, and Law.
Korkin described three new kernel data hijacking attacks, which rely on bypassing Windows operating system (OS) security mechanisms. Then, he discussed the use of the updated MemoryRanger hypervisor to prevent these attacks by controlling access attempts to dynamically allocated data in the kernel.