A recap of Dr Gillian Tully’s DFRWS EU keynote. By Christa Miller, Forensic Focus
No one denies that digital forensics is becoming more complex. The range of devices to examine keeps increasing, as does the complexity of acquiring and analyzing data.
For example, most forensic examiners accept that data associated with a single Internet of Things (IoT) device — a drone, home assistant, or wearable technology — might exist separately on the device itself, the mobile device(s) used to operate it, and the cloud.
Commercial tool vendors admit that even they can’t keep up with the pace of change in digital technology; too many apps and devices are on the market. Vendors therefore encourage their users to test and validate forensic methods for themselves on, say, unsupported apps.
How this is done, though, depends largely on the users themselves — and therefore, on the resources they have to do it. Digital forensic examiners’ time, funding, and experience all impact their capacity to perform quality assurance.
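In practice, this kind of in-house validation often amounts to running a tool against reference data whose contents are known in advance and checking the output against ground truth. The sketch below is purely illustrative — the function, the artifact names, and the comparison logic are hypothetical, not drawn from any vendor’s tooling or from Tully’s keynote — but it shows the basic shape of such a check:

```python
def validate_extraction(extracted: dict, ground_truth: dict) -> dict:
    """Compare a tool's extraction against a known reference dataset.

    Both arguments map artifact identifiers (e.g. message IDs seeded on a
    test device) to their content. Returns a report listing artifacts the
    tool missed, artifacts it should not have reported, and artifacts it
    recovered with the wrong content.
    """
    missing = sorted(set(ground_truth) - set(extracted))
    unexpected = sorted(set(extracted) - set(ground_truth))
    mismatched = sorted(
        name for name in set(extracted) & set(ground_truth)
        if extracted[name] != ground_truth[name]
    )
    return {
        "missing": missing,        # artifacts the tool failed to recover
        "unexpected": unexpected,  # artifacts absent from the reference set
        "mismatched": mismatched,  # artifacts recovered with wrong content
        "passed": not (missing or unexpected or mismatched),
    }

# Hypothetical reference data: messages seeded on a test device.
ground_truth = {"msg_001": "hello", "msg_002": "meet at 9", "msg_003": "ok"}

# Hypothetical tool output: one message missing, one corrupted.
extracted = {"msg_001": "hello", "msg_002": "meet at 8"}

report = validate_extraction(extracted, ground_truth)
print(report)
```

A failing report like this one is exactly the kind of routine, documented check that separates a validated method from an assumed one — the point is not the code but the habit of comparing tool output to known ground truth before relying on it.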
In her keynote address “Risk, Quality Assurance and Innovation in Digital Forensics,” delivered at the Digital Forensics Research Workshop (DFRWS) Virtual Europe 2020 annual conference, Dr Gillian Tully addressed these issues. Dr Tully is a forensic scientist with nearly 30 years of experience, who as the United Kingdom’s Forensic Science Regulator is now responsible for setting quality standards for forensic science in the Criminal Justice System.
What’s Being Standardized, And Why
Tully drew on the results of a research study produced in collaboration between her team, the Defence Science & Technology Laboratory (Dstl), and academic contributors. Published in Forensic Science International: Digital Investigation, “Quality standards for digital forensics: Learning from experience in England & Wales” describes the effort “to review available data to determine the extent to which accreditation to ISO/IEC 17025 is addressing quality issues in digital forensics and consider what changes and resources could be made available to assist with implementation of quality systems,” according to the paper’s abstract.
The context for the research is what Tully called a “cottage industry”: some 60 different legal entities within English and Welsh law enforcement alone, each performing some degree of digital forensics across labs and kiosks, and each relying on its own selection of software tools.
Another challenge, she said: labs in England and Wales frequently extract data but don’t necessarily analyze it. Instead, they pass it to investigators to interpret — even though these investigators lack access to advanced tools and aren’t trained in methodology.
The result, Tully said, was the risk — and the reality — of failures to find exculpatory evidence. One of the most egregious examples occurred in late 2017, when “a combination of error, lack of challenge, and lack of knowledge” led to a defendant, Liam Allan, being falsely charged with rape.
The officer in charge downloaded the contents of the alleged victim’s mobile phone without recording his methods or informing prosecutors. In turn, neither the detective inspector nor the prosecutors she worked with asked about the phone evidence.
As a result, exculpatory messages were not found until the defendant’s counsel obtained and examined the downloads after the trial had already begun.
Around that same time — in October 2017 — the UK had already begun to require digital forensics labs to be accredited to the ISO/IEC 17025:2017 standard, “General requirements for the competence of testing and calibration laboratories.”
The idea was to instill the quality assurance measures that labs need to stand behind their results and to demonstrate appropriate evidentiary provenance, continuity, and validity across the extraction, analysis, and interpretation of digital data. On the other hand, Tully said, quality assurance is challenged by a variety of issues, including the volume of data, encryption, procedural requirements such as the disclosure process (discovery, in the US), and skills shortages.
Furthermore, Tully said in a follow-up statement, accreditation likely would not have made a difference for R v Allan. “One of the outstanding issues is that the standards apply to digital forensics, but much of the analysis and interpretation is actually performed by officers, rather than in [digital forensic units],” she said. Addressing this would demand wider organizational change.
In her keynote, Tully said implementing standards could appear to lead to more issues, but in reality, the process merely uncovers what was already there. “The worst sort of error is unreported error,” she said, stressing that routine and systematic monitoring make it possible to act and improve.
Indeed, the Regulator’s paper “Quality standards for digital forensics” highlighted that the number of overall findings — failures to conform to standard — “was greatly reduced compared to the number raised at the initial assessments.”
That said, numerous risks and issues came up in this second assessment:
- Quality risks and issues included numerous “human error” type problems. Among them: poor exhibit handling and inadequate continuity; competence failures, including outright mistakes and inadequate note-taking; and — most controversial, Tully said, because it isn’t quick and easy — validation failures such as using “methods not fit for purpose.”
- Cyber security weaknesses included internet connections that were supposed to be blocked in labs.
- Tool errors and technical failures included the incorrect attribution of deleted data.
- “DIY” risks included the misinterpretation of output from what Tully called “clever software.”
- Failings in disclosure — whether of the extraction, the search, or relevance — included methodology limits, proportionality and data volume, the use of the wrong search terms, and the weight given to privacy rights versus fair-trial rights.
As digital forensics labs in the UK work on improving these and other issues, law enforcement agencies are preparing to standardize fully mobile front-line data extractions that have no fixed site and rely on self-service “kiosk” technology. These deployments will adhere to the ISO/IEC 17020:2012 standard, “General criteria for the operation of various types of bodies performing inspection.”
This effort’s original October 2020 deadline is currently deferred owing to the COVID-19 pandemic. In the meantime, kiosks and other tools and methods can still be used by frontline non-practitioners. Even so, Tully said, the organization deploying them needs to hold ISO/IEC 17025 accreditation for at least one deployment, and must maintain configuration control and staff competency records.
In addition, the “Quality standards for digital forensics” report noted the need to assess ISO/IEC 17020 and ISO/IEC 17025 implementation together “along with the assessment approach to ensure methods are fit for purpose.”
Moving From The “What” And “Why” To The “How”
In the UK, accreditation goes hand in hand with the Forensic Regulator’s Codes of Practice and Conduct, a document that lays out scientific and professional expectations for all forensic analysts.
In other words, if the ISO standard is the “what,” the Codes are the “how” — leaving room, Tully said, for innovation, a necessary feature when the pace of technological change means labs may need to come up with their own solutions.
In fact, she added, quality standards can be implemented in many ways. Done well, with appropriate leadership and support, they can lead to great improvement; done poorly, however, they might amount to nothing more than assigned tasks, box-ticking, and progress that grinds to a halt.
Ultimately, Tully said, implementing standards is about infusing digital forensic extraction, analysis, and interpretation with the scientific method — the foundation that trained, competent professionals need to meet any challenges they encounter.