Standardisation is currently the subject of animated discussion among digital forensic examiners worldwide. In this opinion piece, Rich2005 looks at the challenges of the ISO17025 standard for digital forensics and why it might not be the best choice for the field. Please note that the views contained within this article are the opinions of its author and do not necessarily reflect the views of Forensic Focus.
In my opinion, ISO17025 is a dangerous standard. It gives the illusion of accuracy and reliability, whilst in the real world it may actually lead to poorer results through static process-following, bulk evidence production, and the assumption of reliable results at the expense of properly considering the complexities of individual cases.
You only have to read the changelogs for the main forensics tools, released daily, weekly, or monthly, each version having passed someone's ISO testing, to know that they were never as reliable as the "tested" tool was purported to be. In fact, whatever baseline testing you do, there is no assurance that just because a tool has passed once, it will pass a second time, whether with the same set of data or a different one.
To use a standard laboratory example: when testing for the presence or value of one single marker, with one single method, there may be a huge raft of factors to consider which might influence the test. Testing for that single marker could well have an entire book's worth of information documenting the processes, potentially influencing factors, tolerances, and so forth. This might include reams of test data to back up why that test and process can be used reliably in mass processing, and what the tolerance levels are for validity, trust, and reliability.
The problem is that digital forensics tools are almost never testing for one value or marker in a data set that rarely changes in structure.
Instead we are often testing for a huge number of values, markers or structures, using logic which is often the best guess of a programmer based on known information. The source code of the program generating the value/marker/structure may not be available. We might be testing against a data set that continually changes in structure, and then have to try to interpret and present the results in an intelligible form!
On top of that, these values will regularly change as the originating programs are updated, hardware is updated, firmware is updated, and so on and so forth. Let alone any “cross-talk” from other programs or applications that might use or modify the generated structures subsequent to their creation.
You can be almost certain that any tool being used under ISO17025 certification right now has flaws in it that will not be detected by the limited testing people will do on it simply to acquire the certification. Proponents of ISO17025 would say that finding some flaws is better than finding none, and to an extent that's right; however, in the real world there is a cost in doing this.
This cost is threefold: time, money, and potentially accuracy. The time and money aspects are relatively obvious, but the cost to accuracy arises because tools cannot be used until they have been verified. That verification will likely not come immediately, and if work needs to be done urgently and cannot wait for verification, an older version of a tool may end up being used. That version might contain bugs that a later release fixes, but if the latest version hasn't been verified then the examiner is forced to use the older one. Of course, it's always possible that a new version of a tool introduces its own errors, but generally speaking it seems logical that a more recent version will have fixed more problems than it has created. In my view it's therefore a giant waste of time, effort, and money to generate the ridiculous mass of documentation ISO17025 requires, and to push people into a rigid process-following mindset.
I would bet my house that the limited ISO17025 testing every lab supposedly following it will carry out, at huge cost in time and money, will still find very few errors, if any; and where it does, most will be the sort of obvious error or failure any vaguely competent examiner would have spotted anyway. Ultimately the field of digital forensics is so vast and complex that no results from a tool should be treated as if they are 100% reliable.
Instead of trying to prove that something inherently unreliable is reliable and trustworthy, the focus should be on how to achieve a genuinely better degree of confidence in any evidence produced.
In the court case context, the best chance of spotting deficiencies in digital evidence produced by the prosecution is by a defence examiner reviewing the work (and vice versa). Therefore cutting down on legal aid and the time a defence examiner might get to assist a client, whilst ramping up things like ISO17025, is either mad or negligent.
The only benefit will potentially be to save government money at the expense of people in the justice system, whilst punishing small companies and individual experts without massive budgets, for whom the cost of ISO17025 is disproportionately exorbitant in comparison to the monetary value of the work they do. I say 'monetary value of the work they do' because many of the finest experts in this field – and I would venture many fields – often run their own small businesses, and aren't part of a large corporate entity that might hope to spread the excessive cost across huge volumes of work, and indeed to win that work purely on the basis of ISO17025 certification, ahead of others who can't realistically afford or justify the cost.
Regardless of ISO17025, this field will always be significantly at the mercy of an individual examiner's skill and experience, and, equally importantly, the time they're able to spend on a job or the limited process they're under instruction to follow.
For a long time there’s been a “race to the bottom” in digital forensics, and many perfectly competent examiners will be caught between the desire to investigate a case thoroughly and their employer wanting to turn a profit, perhaps limiting their time allowed to investigate the job, limiting the scope of the job, forcing them to adhere to a strict process for the purposes of ISO17025 or templated sales documents/contracts, and so on.
To use an example: in certain jobs where possession of material is illegal, processes might be applied to the digital data, whether manual or automated, to identify the illegal material and then report the results, along with associated details of the systems and so on. Of course we'd all want the who/what/when/where/why/how detailed in the report as much as possible, but bluntly this does not always happen (and forensic examiners simply do not always get unlimited time to investigate each item – I'd argue it's not uncommon that they don't get sufficient time to take the well-rounded look at the case that a criminal matter might justify).
In my own personal experience I've had cases where it took little more than an hour of reading the case papers and looking at a forensic image to be confident there was no case to answer for a defendant (and the case was subsequently thrown out within minutes of the court date starting). This wasn't because the original examiner did anything "wrong" or "in bad faith"; they simply reported the presence of material and some supporting equipment/OS details, and the case progressed all the way to trial. At great expense to the state, no doubt, and undoubted distress to the accused.
To try to give an analogy in a fictitious case: let’s say someone’s murdered their wife by poisoning her.
The prosecution examiner is given a couple of days to examine the computer and report their findings (not much time, I know – but don't be surprised how little time might get spent on a job these days!). They process the computer in various tools, and after initial reviews don't find anything of particular relevance, but then come across an ebook named "How To Murder My Wife" and another called "A History of Chemicals and Poisons", both in the ebooks folder of the user directory for the main suspect.
So, this obviously gets flagged up immediately, and reported upon. It comes to court, this is presented, and along with other weaker circumstantial evidence from the prosecution, the individual is convicted, with (for the sake of argument) no defence expert allocated to examine the computer, and the suspect denying knowledge of these books or of having read them. Had that examiner been given slightly longer to examine the case, they might have presented different findings.
Upon appeal the suspect gets a defence examiner, who confirms the presence of the relevant ebooks, and confirms that they were in the ebooks folder, along with 10,000 other ebooks, many of which had likely been extracted from a single zip file, downloaded via a torrent program. The link to the torrent is identified in an email from one of their friends, saying there was a good book in there about car repair, and that they should have a look. The book about car repair is one of the few ebooks that had been more recently accessed and had registry evidence of being viewed. The poison and murder books had no such evidence. In light of this there’s a retrial and the accused is subsequently acquitted.
I am absolutely certain that if people do not get defence experts when they’re faced with computer-based evidence, then there WILL be miscarriages of justice. Or if defence experts carry out examinations to the letter of their request, in order to stay within their allocated time, the same issue will arise. Miscarriages of justice, or “close calls”, can easily occur, not just through bad intentions but also by simply not being allocated enough time to do a sufficiently thorough job.
Spending more money on things like ISO certification while there is immense pressure on forensic companies, forensic units, and legal aid, is like spending £100,000 on a sticker saying “seaworthy” for your boat because you’ve had someone verify that you’ve got a written process that you follow for maintenance, whilst the boat has a big hole in the side and is taking on water.
Obviously, getting an opposing expert is not a solution to all problems with digital evidence, and is only an extra layer of safety and verification; however, I'd say it's a far more important one than ISO17025 will ever be (easily 10 or 100 times more important). As is providing sufficient time for an examiner to complete a competent job, rather than a brief tick-box exercise.
If they really do want to continue the obsession with testing tools, then they should set up a central body to verify forensic tools. That body should then report publicly on the issues it identifies, so that everyone is aware and can compensate whilst waiting for a manufacturer fix, or work around the issue.
I see little value in the rigid process documentation, or in setting out decision trees a mile long, all of which would have the end point of essentially trying to comply with the ACPO principles and produce best evidence with the least modification possible, whilst being documented.
They should scrap ISO17025 for digital forensics. It’s not fit for purpose, and simply will never achieve any meaningful degree of improvement in the reliability of evidence in this field. If they really want to prevent miscarriages of justice, then there are tangible things they could do to improve this rather than a pointless, expensive certification that hardly anyone in the field has any faith in.
What do you think? Do you agree with Rich2005 that strictly adhering to standards such as ISO17025 could lead to miscarriages of justice? Share your thoughts in the comments below, or if you’d like to submit your own opinion piece, you can email it to [email protected].