ISO 17025 for Digital Forensics – Yay or Nay?

Merriora
(@merriora)
Junior Member

Computer forensic labs are an odd mixture of investigation on one hand, and question-answering on the other. Much more investigation, much less scientific fact-finding.

In the general area of investigations, I don't think standardization is of any use. But the narrow questions, such as 'was this file modified on <date> <time>? by whom? With what result?' could and should be subject to standardization.

I think this is a very important distinction in the type of work we complete within our field. Often we are not giving expert opinions within our report on whether XYZ occurred, but rather providing information on what we observed on the device, which can then be compared with other information known about the incident. (Investigative)

Example
- Here is a list of all calls and SMS messages obtained from the device compared to the Call Detail Records (CDRs)

VS.

In cases where an expert opinion is required:

Examples

- Was this picture taken with this phone on <date>?
- Was Joe using this device on <date/time>?

In the latter examples, further standards should be developed, concentrating both on the experience of the investigator and on the tools/methods* used to come to that determination.

In this situation, further testing should be done, and should be expected by the courts, in order to provide that expert opinion for this particular question if it is essential to the case.

Topic starter Posted : 25/01/2018 5:03 pm
bshavers
(@bshavers)
Active Member

My apprehension in the debate over accreditation of digital forensics "labs" is that most of the proposed standards do not apply to the DFIR field and therefore will negatively disrupt it.

A digital forensics "lab" is often just a laptop connected to an external hard drive that contains a forensic image of electronic data, which can be examined in virtually any location on the planet (or off the planet).

The forensic work is interpreting the data. Preserved data does not spoil or rot and is not affected by an analysis. For scientific analysis as intended by the various ISOs, there is no element on the table of elements that can be compared to electronic data, as the testing of any element will result in it being changed, altered, modified, or even destroyed by a lab analysis. All elements, even without an analysis, are affected by environmental conditions, including the passage of time, some more so than others.

Electronic data is not an element. It can be preserved, perfectly duplicated, and tested (interpreted) forever without alteration. The environment does not affect data. The testing does not affect the data. The passage of time does not affect the data. Storage media may fail, but the data can be preserved onto new media forever. There is practically no difference between reading a book and examining a forensic image. Once preserved, the information/data is unchanged by reading/interpreting it. This cannot be said of any element on the table of elements.

The focus should be on training and education standards for the examiner and on processes for the collection of electronic evidence, whether derived from modified ISO standards and/or methods commonly used by the community.

Today, technology is a moving target. Tomorrow, it may be out of reach if we restrict our work by implying that the mere interpretation of data from a forensic image requires the same environmental standards as conducting an autopsy on a human body or an analysis of a single drop of blood.

Posted : 25/01/2018 6:42 pm
thefuf
(@thefuf)
Active Member

The forensic work is interpreting the data. Preserved data does not spoil or rot and is not affected by an analysis.

The forensic work also includes a significant element: data acquisition. It's easy to say "preserved data"; it's not so easy to preserve the data during acquisition. If data is preserved, then yes, data interpretation errors can be resolved by examining this data again, although such errors can remain invisible in a particular case (still, there are legal ways to reduce the risk of unnoticed data interpretation errors). When data is not preserved (during its acquisition or at a later time), a number of obvious issues may arise. Moreover, sometimes forensic examiners have to prove that data was actually preserved as expected (and there should be an easy way to do this).
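A minimal sketch of what such an "easy way" might look like: re-hash the preserved image and compare it against the hash recorded at acquisition time. This is only an illustration in Python; the file names, command-line usage and the choice of SHA-256 are assumptions, not anything prescribed in this thread.

import hashlib
import sys

def sha256_of(path, chunk_size=1024 * 1024):
    """Stream the file in chunks so multi-terabyte images fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Usage: python verify_image.py evidence.dd <hash recorded at acquisition>
    image_path, recorded_hash = sys.argv[1], sys.argv[2]
    current_hash = sha256_of(image_path)
    if current_hash == recorded_hash.lower():
        print("OK: image still matches the acquisition hash")
    else:
        print("MISMATCH: the preserved data is not what was acquired")
        print("  recorded:", recorded_hash)
        print("  current :", current_hash)

Of course, this only proves that the image has not changed since it was acquired; it says nothing about whether the acquisition itself altered the original drive, which is the point that follows.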

Currently, forensic examiners are blindly attaching a magic box which makes the acquisition process forensically sound (this box is called a hardware write blocker), and courts are accepting this method. But this is so wrong! We need better validation methods and standards for hardware write blockers and other tools. We need a disclosure standard for vendors of forensic software/hardware. The acquisition process is crucial, so critical issues with basic tools like write blockers should be publicly discussed, because the "examine the data again" approach won't always work if the original data isn't intact.

Posted : 26/01/2018 6:12 am
bshavers
(@bshavers)
Active Member

I neglected the acquisition aspect since it is impossible to require all, or even some, acquisitions to occur in an ISO-certified lab environment. Many acquisitions are conducted onsite because of the nature of the systems involved or the limited time allowed to acquire. If there is ever a requirement for lab-only acquisitions, you can imagine the negative impact that will have on forensics.

Posted : 26/01/2018 6:25 am
jaclaz
(@jaclaz)
Community Legend

Not being a professional in the field, I am allowed to say that - while of course data should not be changed at a whim - the current fixation on "total integrity" is mainly fluff that the write blocker industry happily promotes, and that it risks having forensic examiners, obsessed by this particular (largely) non-issue, focus on this aspect and leave other parts of the evidence unexplored or mis-explored.

Previous related discussion
https://www.forensicfocus.com/Forums/viewtopic/t=11739/postdays=0/postorder=asc/start=5/

Anyway, there is not one reason in the world to have a hardware write blocker (let alone to trust it blindly). The fact that no one has put together a basic OS, open source and fully documented, that runs (at a decent speed) on something inexpensive like a Pi or any given "standard" board (possibly with a processor that does NOT have "speculative execution" wink), and that is verified/certified by members of the international forensics community, should be proof enough that there is no actual consensus on this very basic aspect; there is simply no chance in any foreseeable future of having any sensible standard/procedure.
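For what it's worth, the basic building block for such a "soft blocked OS" already exists in the Linux kernel: a block device can be flagged read-only at the block layer before anything touches it. A rough sketch of that idea in Python; the device path is a placeholder, the ioctl numbers are BLKROSET/BLKROGET from <linux/fs.h>, root is required, and this on its own is not a validated write-blocking solution.

import fcntl
import struct

# ioctl request numbers from <linux/fs.h>
BLKROSET = 0x125D  # set the block device read-only flag
BLKROGET = 0x125E  # get the block device read-only flag

def set_block_device_readonly(device="/dev/sdX"):  # placeholder device path
    """Ask the kernel to reject writes to this block device (needs root)."""
    with open(device, "rb") as dev:
        fcntl.ioctl(dev.fileno(), BLKROSET, struct.pack("i", 1))
        # Read the flag back to confirm it took effect
        result = fcntl.ioctl(dev.fileno(), BLKROGET, b"\x00" * 4)
        return struct.unpack("i", result)[0] == 1

This is roughly what "blockdev --setro" does: the flag has to be set before any filesystem is mounted or scanned, and it only covers writes issued through the block layer (passthrough commands can bypass it), which is exactly why a documented, community-verified distribution built around it would still need proper validation.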

The good news about the ISO 17025 norm being forced down the throats of the good UK forensicators is that it could be the occasion for them (and for those from other countries, scared to death by the possibility that the same will happen to them sooner or later) to actually get their act together and propose (better) alternatives.

jaclaz

Posted : 26/01/2018 10:20 am
thefuf
(@thefuf)
Active Member

I neglected the acquisition aspect since it is impossible to require all, or even some, acquisitions to occur in an ISO-certified lab environment. Many acquisitions are conducted onsite because of the nature of the systems involved or the limited time allowed to acquire. If there is ever a requirement for lab-only acquisitions, you can imagine the negative impact that will have on forensics.

This aspect could be neglected if forensic labs weren't doing data acquisitions (within a lab) at all and if the integrity of digital evidence wasn't the issue. Also, there could be a better solution than ISO 17025 for the acquisition phase, so on-site data acquisitions can be covered as well.

Posted : 26/01/2018 10:39 am
thefuf
(@thefuf)
Active Member

Not being a professional in the field, I am allowed to say that - while of course data should not be changed at a whim - the current fixation on "total integrity" is mainly fluff that the write blocker industry happily promotes, and that it risks having forensic examiners, obsessed by this particular (largely) non-issue, focus on this aspect and leave other parts of the evidence unexplored or mis-explored.

There was a malware investigation with two suspect drives. One drive had important evidence located in the $LogFile (NTFS), while another one had this file wiped (entirely). This kind of data modification was observed (under specific conditions) when booting many live forensic distributions based on Ubuntu, including those validated by NIST (with no such issues recorded in the reports).

Should we tolerate this behavior? No, because this is a real issue with an obviously significant impact. Do vendors document such issues? In general, no. Do vendors fix such issues? Sometimes (also, there was a case when a vendor silently introduced a fix, but later removed it, thus bringing the issues back, see my paper here).

Anyway, there is not one reason in the world to have a hardware write blocker (let alone to trust it blindly). The fact that no one has put together a basic OS, open source and fully documented, that runs (at a decent speed) on something inexpensive like a Pi or any given "standard" board (possibly with a processor that does NOT have "speculative execution" wink), and that is verified/certified by members of the international forensics community, should be proof enough that there is no actual consensus on this very basic aspect; there is simply no chance in any foreseeable future of having any sensible standard/procedure.

In my opinion, many forensic examiners are obsessed with claims made by vendors and entities like NIST.

Posted : 26/01/2018 12:45 pm
jaclaz
(@jaclaz)
Community Legend

@TheFuf
a little bit off topic, but I believe - from what you reported elsewhere and from the contents of your article - that the issues with those Linux distros are related EXCLUSIVELY to booting Linux with the evidence drive connected; the same behaviour cannot be replicated if the evidence drive is hot-connected (via USB or SATA/eSATA).

We are saying the exact same things; however, there is no guarantee that a hardware write blocker is in any way "read only", or at least not "more read only" than a proper "soft blocked OS".

The perplexing issues remain:
1) BOTH public institutions (NIST in this case) and private parties (Paladin or, as we discussed elsewhere, Passmark Forensics) can make mistakes and overlook possible issues [1]
2) there is no reason to believe that hardware write blocker manufacturers/vendors are in any way exempt from these same (or other) mistakes [1]
3) there is no active interest from the digital forensics community (at large/generically speaking) in fixing the situation/contributing to a solution

All in all it seems to me like the hardware write blocker manufacturers are held up (and well paid for that role, BTW) as scapegoats in case something goes wrong, and the whole thing is widely deemed an S.E.P.

https://en.wikipedia.org/wiki/Somebody_else%27s_problem

The same ol' "the Devil made me do it, officer" excuse wink .

jaclaz

[1] a bug is possible in *anything* complex enough

Posted : 26/01/2018 2:15 pm
Merriora
(@merriora)
Junior Member

There was a malware investigation with two suspect drives. One drive had important evidence located in the $LogFile (NTFS), while another one had this file wiped (entirely). This kind of data modification was observed (under specific conditions) when booting many live forensic distributions based on Ubuntu, including those validated by NIST (with no such issues recorded in the reports).

Should we tolerate this behavior? No, because this is a real issue with an obviously significant impact. Do vendors document such issues? In general, no. Do vendors fix such issues? Sometimes (also, there was a case when a vendor silently introduced a fix, but later removed it, thus bringing the issues back, see my paper here).

This is an area where I would like to see further standards both with software vendors and labs.

When discussing standards and especially ISO 17025, I've heard the concern about having to test every new software release which would obviously be very time intensive and difficult to complete.

For vendors, I would like to see them be upfront and very clear about any bugs or issues they discover within their software. Information and test data should be provided openly so that labs have a clear understanding of how and when the 'bug' affects data.

It would then be up to the lab to ensure no previously released Digital Forensics Reports were affected by that bug. If the bug did affect the data, then a supplementary report should be written to clearly identify how the data was affected. Since validation and testing of data should already be completed on every examination, most bugs will have caused insignificant issues if any issues at all.

I think the only way to ensure that the above takes place is for Lab Standards to be in place that require these steps. Individuals can't be responsible, as they will often come and go from a lab in the years it takes many files to get to court.

The reality is that 'bugs' will always be a part of software, including forensic software. If we are proactive in re-analyzing the reports that may have been affected when a bug is discovered, this will show that the tools we use are effective and can be trusted, as data is validated and procedures are in place to catch potential issues.

Topic starter Posted : 26/01/2018 3:09 pm
bshavers
(@bshavers)
Active Member

I neglected the acquisition aspect since it is impossible to require all, or even some, acquisitions to occur in an ISO-certified lab environment. Many acquisitions are conducted onsite because of the nature of the systems involved or the limited time allowed to acquire. If there is ever a requirement for lab-only acquisitions, you can imagine the negative impact that will have on forensics.

This aspect could be neglected if forensic labs weren't doing data acquisitions (within a lab) at all and if the integrity of digital evidence wasn't the issue. Also, there could be a better solution than ISO 17025 for the acquisition phase, so on-site data acquisitions can be covered as well.

Acquisitions conducted outside the lab have too many unanticipated factors to cover with a blanket standard. In both civil and criminal cases, there are restrictions as to (1) what you can acquire, (2) how much you can acquire, (3) when you can acquire, and (4) what you can take with you. Many cases today have the analysis done onsite at the victim's business. When it is impractical to acquire terabytes of data, or when you are only allowed to look at email within the four walls of a business, a lab cannot be involved in both acquisition and analysis.

As onsite response generally requires on-the-spot decision-making and innovative, untested responses to handle new threats, violating a lab policy will be the norm. In these incidents, the "lab" is the victim's business, which certainly will not meet any standard for a certified lab.

Posted : 26/01/2018 9:04 pm
thefuf
(@thefuf)
Active Member

Acquisitions conducted outside the lab have too many unanticipated factors to cover with a blanket standard.

I disagree, because at least some basic rules, with exceptions, can be defined. If a forensic examiner wants to boot a suspect system directly and he/she has the right/permission to do so, that's okay. But it's not okay when a hardware write blocker is writing to a suspect drive (and it doesn't matter whether this happens in a lab or on site).

The rules I'm talking about don't necessarily restrict a forensic examiner; these rules may also target vendors of forensic tools (even to a higher degree).

In both civil and criminal cases, there are restrictions as to (1) what you can acquire, (2) how much you can acquire, (3) when you can acquire, and (4) what you can take with you. Many cases today have the analysis done onsite at the victim's business. When it is impractical to acquire terabytes of data, or when you are only allowed to look at email within the four walls of a business, a lab cannot be involved in both acquisition and analysis.

As onsite response generally requires on-the-spot decision-making and innovative, untested responses to handle new threats, violating a lab policy will be the norm. In these incidents, the "lab" is the victim's business, which certainly will not meet any standard for a certified lab.

Since forensic examiners use tools (hardware and software ones) to access and interpret data, we should distinguish actions taken by a person using a tool from actions performed by a tool without the person's knowledge. If a forensic examiner wants to do something that will violate the integrity of digital evidence (and this doesn't necessarily mean that the evidence becomes inadmissible), then he/she is responsible for that action and its consequences (legal and technical). But what if a tool did something that violated the integrity of digital evidence and the forensic examiner wasn't aware of this? Typically, we would blame the examiner. But given that some vendors attempt to hide issues with the forensic soundness of their tools (for example, by deploying fixes silently), and that there is no way for an average lab to validate a complex data acquisition tool or a write blocker ("write known data - do something - compare", "hash - do something - hash again", and similar black-box approaches don't work well), finding the answer to this question isn't as easy as it seems. And this is why, in my opinion, there should be a standard.
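To make the black-box approach concrete, here is roughly what the "hash - do something - hash again" check looks like; a minimal sketch in Python, where the device path, the SHA-256 choice and the do_something callback are illustrative assumptions rather than any tool's actual test procedure.

import hashlib

def hash_device(path, chunk_size=1024 * 1024):
    """SHA-256 over the raw contents of a device or image file."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def black_box_check(device_path, do_something):
    """'hash - do something - hash again': True if no change was observed."""
    before = hash_device(device_path)
    do_something()  # e.g. attach the write blocker and image the drive
    after = hash_device(device_path)
    return before == after

The limitation is visible immediately: the check only detects changes that occur between the two hash passes on data the examiner can read back, so a tool that writes before the first pass, or only under conditions the test never exercises, passes cleanly. This is why black-box validation alone isn't enough.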

Posted : 26/01/2018 11:32 pm
thefuf
(@thefuf)
Active Member

a little bit off topic, but I believe - from what you reported elsewhere and from the contents of your article - that the issues with those Linux distros are related EXCLUSIVELY to booting Linux with the evidence drive connected; the same behaviour cannot be replicated if the evidence drive is hot-connected (via USB or SATA/eSATA).

The activation of LVM volumes can be performed by a udev rule when a suspect drive is connected to a system after boot. This is just an example (not related to the $LogFile issue), because you used the word "EXCLUSIVELY". -)

Posted : 26/01/2018 11:40 pm
thefuf
(@thefuf)
Active Member

When discussing standards and especially ISO 17025, I've heard the concern about having to test every new software release which would obviously be very time intensive and difficult to complete.

Side note: one vendor didn't bump the version number after patching a reported issue with forensic soundness.

It would then be up to the lab to ensure no previously released Digital Forensics Reports were affected by that bug. If the bug did affect the data, then a supplementary report should be written to clearly identify how the data was affected. Since validation and testing of data should already be completed on every examination, most bugs will have caused insignificant issues if any issues at all.

Not all modifications made to original data can be identified after the fact. For example, if an acquisition tool did sync a software RAID set, then there could be no way to tell whether or not the array was out of sync before a forensic examiner ran the tool. So, in some cases you don't know for sure whether the data was affected by an issue or not.

Posted : 26/01/2018 11:58 pm
pbeardmore
(@pbeardmore)
Active Member

Sometimes, I wonder if the industry itself is partly to blame. The language that we choose to use around the industry is broadly in line with previously existing forms of forensics. By aligning digital forensics so closely with other forms of forensics, we have been "lumped in" with other lab-based forensic practices whilst, in "the real world", we are very, very different and really require our own standard.

The debate concerning mandatory standards presupposes that there is a relevant and meaningful standard available to apply (with qualified people to oversee the process).

How many of us, when we go to work, put on the white coat and gloves and enter the "calibration laboratory"? Not me; I work in an office, and I think most of us do. I know this sounds simplistic, and I know it's open to debate, but it's also simplistic to assume that computer forensics has enough in common with traditional forensics to use the same set of standards.

The challenges (both short- and long-term) are just so, so different. End of moan… for now.

PS: ever since using 17025 was first mooted by the regulator years ago, I don't think I have met one colleague within the industry who has said "yay, 17025, great idea". Almost the exact opposite. So do I trust the mass consensus within the industry (including many voices that I have huge respect for), or do I trust the regulator?

Posted : 27/01/2018 1:03 am
athulin
(@athulin)
Community Legend

When discussing standards and especially ISO 17025, I've heard the concern about having to test every new software release which would obviously be very time intensive and difficult to complete.

For the individual 'lab', absolutely. But as long as the test concerns something that is not lab-specific, it is technically acceptable to make one test, and share the results. A bit like what NIST does with their tests.

The absence of such activity suggests a field of forensic activity that isn't so much a field as a crowd of individuals.

Are there any CF interest organizations involved in doing or coordinating such tests for their members? I'd probably join such an organization without having to think too much about it …

The SWGDE is fairly close (https://www.swgde.org/). Their 'Framework of a Quality Management System for … Forensic Science Service Providers' is an interesting contrast to ISO 17025, as it explicitly targets digital forensics 'labs'.

Posted : 27/01/2018 7:50 am