One of the most interesting and important things about digital forensics research is the way legal, cultural, and operational landscapes in different countries inform and even drive the research. In other words, digital technology doesn’t exist in a vacuum, even when the technology itself is the study subject.
These landscapes are apparent in October’s research roundup, where papers from Ghana and Brazil in particular describe unique challenges and opportunities. Of course, how investigations are conducted in one country could impact joint, cross-border efforts against international crime. It could also affect resourcing for the kinds of technical investigations described this month.
Opportunities for digital forensics collaboration in Ghana
So much digital forensics research comes from the developed world that it can be easy to overlook what’s happening in developing countries. Yet, as researchers in Ghana point out, that in itself is a problem.
In Forensic Science International (FSI): Synergy, “An overview of the digital forensic investigation infrastructure of Ghana” contrasts the funding and attention given to digital forensics in developed countries with the dearth of attention in the developing world — even though “cybercrime issues in Africa seem to be worse.”
In Ghana, however, digital forensics is still — in spite of advancements — in its infancy. That compounds the impact of cybercrime in Africa, which according to Richard Apau, from the Kwame Nkrumah University of Science and Technology, and Felix N. Koranteng, from the University of Education Winneba, results in annual losses worth millions of dollars.
“Existing legislations are scattered and cumbersome whereas mandated institutions lack the requisite capacity,” they concluded. To that end, numerous opportunities exist for collaboration and capacity-building.
In particular, course development and research projects are two areas where academic institutions and researchers in developed countries have an opportunity to include Ghanaian counterparts. Likewise, industry practitioners are encouraged “to foster stronger international partnership in fighting against borderless cyber fraud activities.”
Both are key to encouraging Ghanaian policymakers to build the structures needed for improved digital forensics investigation.
The trustworthiness theme continues
Capacity in individual countries has a knock-on effect on the entire digital forensics field. Our recent roundups discussed papers that describe bringing more structure to digital evidence evaluations. This month the theme continues with four papers that focus on the trustworthiness of digital forensic tools, the methods used to evaluate them, and the need to revisit even the smallest pieces of hardware we tend to take for granted.
In FSI: Digital Investigation, researchers from the Brazilian Federal Police proposed the “Peritus Framework: Towards multimedia evidence analysis uniformization in Brazilian distributed forensic model.”
In Peritus, Daniel de O. Cunha, A. Silva, Jorge de A. Lambert, and Rafael O. Ribeiro devised a modular design to integrate many of the forensic tools used for day-to-day multimedia evidence analyses. Their reasoning: “increase efficiency, reinforce reproducibility, strengthen the chain of custody and achieve response uniformization throughout [Brazil].”
There, as in other countries, “forensic services are distributed between Federal and States’ jurisdictions and forensic analysts come from different backgrounds,” the authors wrote. As a result, responses from the different forensic units can vary widely, resulting in “variable quality and reliability.”
The Peritus framework is designed to be “a knowledge center for developed tools and a methodological guideline” allowing investigators to focus their efforts on the technical and evaluative aspects of their job. That’s important as multimedia volume and variety continue to increase; Peritus’ job is to normalize, integrate, and document data across its integrated tools.
“Our experience and the feedback received show that the natural flow and the integration provided by the presented system significantly improves efficiency and the robustness of the obtained results,” the authors concluded.
In FSI: Science & Justice, meanwhile, researchers from the Islamic University of Madinah (Saudi Arabia) and the University of Kashmir (India) asked: Can computer forensic tools be trusted in digital investigations?
Wasim Ahmad Bhat, Ali Al Zahrani, and Mohamad Ahtisham Wani focused on whether four standard computer forensic tools would recognize anti-forensic file system attacks such as artefact wiping and trail obfuscation, and could “extract complete and credible digital evidence from these digital crime scenes.”
Using an accepted black-box forensic tool testing methodology, the researchers found that the tools could not, in fact, identify most attacks. Even when they could — as with encryption and steganography attacks — they failed to collect the evidence.
“These results imply that evidences collected by CFTs in digital investigations are not complete and credible in the presence of AF attacks,” the researchers wrote, suggesting “that practitioners and academicians should not absolutely rely on CFTs for evidence extraction from a digital crime scene.”
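The paper doesn’t publish its test harness, but the kind of trail-obfuscation attack the tested tools missed — a file renamed to disguise its true type — can be sketched with a simple magic-byte check. The signature table and function names below are illustrative, not from the paper:

```python
# Minimal file-signature check: compare a file's claimed extension against
# the type implied by its leading "magic" bytes. A mismatch is a classic
# trail-obfuscation indicator (e.g. a JPEG renamed to .txt).
MAGIC = {
    b"\x89PNG\r\n\x1a\n": "png",
    b"%PDF": "pdf",
    b"\xff\xd8\xff": "jpg",
}

def true_type(data: bytes):
    """Return the file type implied by the magic bytes, or None if unknown."""
    for magic, ftype in MAGIC.items():
        if data.startswith(magic):
            return ftype
    return None

def extension_mismatch(filename: str, data: bytes) -> bool:
    """True when the content's real type contradicts the file extension."""
    claimed = filename.rsplit(".", 1)[-1].lower()
    actual = true_type(data)
    return actual is not None and actual != claimed
```

A production tool would use a far larger signature database and handle container formats, but even this check flags the renamed-file cases the study describes.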
The theme of trust also arose in the Journal of Digital Forensics, Security & Law (JDFSL), where The Open University’s Ian M. Kennedy, Blaine Price, and Arosha Bandara discussed “Towards Increasing Trust In Expert Evidence Derived From Malware Forensic Tools.”
These authors focused on making a malware forensic investigation more scientific, specifically around selecting and evaluating dynamic malware analysis tools. Their framework, the Malware Analysis Tool Evaluation Framework (MATEF), takes into account “the literature, legal, regulatory and practical needs” associated with this practice.
Finally: can the point of sale (PoS) devices so many people rely on for credit card purchases be trusted? Also in the JDFSL this month, Stephen Larson, James Jones, and Jim Swauger took “A Forensic First Look at a POS Device: Searching for PCI DSS Data Storage Violations.”
They sought to extract unencrypted data so that they could identify whether the Payment Card Industry Data Security Standard (PCI DSS) requirement to protect stored cardholder data had been violated.
Their research identified no such violations, but demonstrates the importance of examining long-established devices, their storage mechanisms, and their code. As the authors wrote: “The confirmation that the POS systems examined keep our payment card information encrypted is welcome news as payment cards are still very much in use in our daily activities.”
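A first pass at the kind of search the authors performed — scanning extracted storage for unencrypted primary account numbers — can be sketched as a digit-run scan filtered by the Luhn checksum. The regex and function names here are illustrative assumptions, not the paper’s method:

```python
import re

def luhn_valid(number: str) -> bool:
    """Luhn checksum: filters out random digit runs that aren't card numbers."""
    digits = [int(d) for d in number][::-1]
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:       # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# Card numbers (PANs) are 13-19 digit runs not embedded in longer digit runs.
PAN_RE = re.compile(r"(?<!\d)(\d{13,19})(?!\d)")

def find_candidate_pans(blob: bytes) -> list:
    """Scan a raw storage image for digit runs that pass the Luhn check."""
    text = blob.decode("latin-1")  # 1:1 byte-to-char mapping, never errors
    return [m for m in PAN_RE.findall(text) if luhn_valid(m)]
```

If a scan like this surfaces plaintext PANs in a device image, PCI DSS Requirement 3 (protect stored cardholder data) is the relevant standard; the study found no such plaintext on the device examined.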
Research continues on emerging devices
“New” technology is still the focus of much research, of course. In FSI: Digital Investigation, two papers look at investigating Internet of Things (IoT) devices and at using in-vehicle data to identify a suspect based on their driving behavior.
In their paper “Stitcher: Correlating digital forensic evidence on internet-of-things devices,” researchers from the Singapore University of Technology and Design and ShanghaiTech University surveyed 39 digital forensic public and private sector investigators.
Finding that IoT devices’ volume and variety compound existing digital forensic challenges with correlation and consistency, Yee Ching Tok, Chundong Wang, and Sudipta Chattopadhyay created a new tool, Stitcher, which addresses these challenges.
They then invited survey participants to apply Stitcher to a simulated sophisticated IoT criminal investigation. “96.2% of users indicated that Stitcher assisted them in handling the crime, and 61.5% of users who used Stitcher with its full features solved the crime completely,” the authors wrote.
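Stitcher’s internals aren’t detailed here, but its core task — correlating events from heterogeneous IoT logs onto one timeline — can be sketched as follows. The device names, timestamp formats, and 30-second window are hypothetical:

```python
from datetime import datetime, timedelta

# Hypothetical logs from two IoT devices with different timestamp formats.
camera_log = [("2020-03-01 14:02:11", "motion detected")]
lock_log = [("01/03/2020 14:02:15", "front door unlocked")]

def normalize(entries, fmt, device):
    """Parse each device's timestamp format into comparable datetimes."""
    return [(datetime.strptime(ts, fmt), device, event) for ts, event in entries]

# Merge both sources onto a single, chronologically sorted timeline.
timeline = sorted(
    normalize(camera_log, "%Y-%m-%d %H:%M:%S", "camera")
    + normalize(lock_log, "%d/%m/%Y %H:%M:%S", "smart lock")
)

# Flag consecutive events from *different* devices within a short window --
# candidate correlations for an investigator to review.
WINDOW = timedelta(seconds=30)
correlated = [
    (a, b) for a, b in zip(timeline, timeline[1:])
    if a[1] != b[1] and b[0] - a[0] <= WINDOW
]
```

The consistency problem the survey respondents raised shows up immediately in practice: the two logs above disagree even on date format, so normalization has to happen before any correlation is possible.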
Meanwhile, “Driver identification using in-vehicle digital data in the forensic context of a hit and run accident” sought to find out whether drivers could be positively identified based on in-vehicle digital data that captured the drivers’ natural driving behavior.
Researchers Klara Dološ, Conrad Meyer, Andreas Attenberger, and Jessica Steinberger, all from the Munich, Germany-based Central Office for Information Technology in the Security Sector (ZITiS), trained and validated a machine learning model using “freely available” driving behavior data to classify drivers.
From there, they formulated a hit-and-run collision scenario with three known suspects. The ultimate goal was to answer the forensic questions: Which suspect did the evidence most likely indicate, and how certain was that claim? Model accuracy, false detection rate (FDR), and random match probability (RMP) all factored into the research.