
ISO 17025 for Digital Forensics – Yay or Nay?

126 Posts
18 Users
0 Likes
9,197 Views
(@thefuf)
Posts: 262
Reputable Member
 

I neglected the acquisition aspect since it is impossible to require all (or even some) acquisitions to occur in an ISO-certified lab environment. Many acquisitions are conducted onsite by virtue of the systems involved or the limited time allowed to acquire. If there is ever a requirement to have lab-only acquisitions, you can imagine the negative impact that will have on forensics.

This aspect could be neglected if forensic labs weren't doing data acquisitions (within a lab) at all and if the integrity of digital evidence weren't at issue. Also, there could be a better solution than ISO 17025 for the acquisition phase, one that covers on-site data acquisitions as well.

 
Posted : 26/01/2018 10:39 am
(@thefuf)
Posts: 262
Reputable Member
 

Not being a professional in the field, I am allowed to say that - while of course data should not be changed at a whim - the current fixation on "total integrity" is mainly fluff that the write blocker industry happily promotes, and that it risks having forensic examiners - obsessed with this particular (largely) non-issue - focus on this aspect and leave other parts of the evidence unexplored or mis-explored.

There was a malware investigation with two suspect drives. One drive had important evidence located in the $LogFile (NTFS), while another one had this file wiped (entirely). This kind of data modification was observed (under specific conditions) when booting many live forensic distributions based on Ubuntu, including those validated by NIST (with no such issues recorded in the reports).
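
For anyone who wants to run the same check against their own boot media, here is a minimal sketch of the idea in Python. It assumes pytsk3 (the SleuthKit Python bindings) is installed and that before.dd and after.dd are raw images of the same NTFS test drive, taken before and after booting the distribution under test; the file names and offsets are illustrative only, not from the actual case.

```
# Minimal sketch: compare the NTFS $LogFile in two raw images of a test drive.
# Assumes pytsk3 (SleuthKit Python bindings) and that the NTFS volume starts at
# offset 0 of each image (pass offset=... to FS_Info otherwise).
import hashlib
import pytsk3

def logfile_hash(image_path):
    img = pytsk3.Img_Info(image_path)
    fs = pytsk3.FS_Info(img)
    logfile = fs.open("/$LogFile")
    size = logfile.info.meta.size
    h = hashlib.sha256()
    offset = 0
    while offset < size:
        data = logfile.read_random(offset, min(1 << 20, size - offset))
        if not data:
            break
        h.update(data)
        offset += len(data)
    return h.hexdigest()

# Image the test drive, boot the live distribution with the drive attached,
# shut down, image the drive again, then compare the two $LogFile hashes.
if logfile_hash("before.dd") != logfile_hash("after.dd"):
    print("$LogFile was modified between the two images")
```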

Should we tolerate this behavior? No, because this is a real issue with an obviously significant impact. Do vendors document such issues? In general, no. Do vendors fix such issues? Sometimes (also, there was a case when a vendor silently introduced a fix, but later removed it, thus bringing the issues back, see my paper here).

Anyway, there is not one reason in the world to have a hardware write blocker (let alone to trust it blindly). The fact that no one has put together a basic OS - open source and fully documented - that runs (at a decent speed) on something inexpensive like a Pi or any given "standard" board (possibly with a processor that has NOT got "speculative execution" wink ), and that is verified/certified by members of the international forensics community, should be proof enough that there is no actual consensus on this very basic aspect; there is simply no chance in any foreseeable future to have any sensible standard/procedure.

In my opinion, many forensic examiners are obsessed with claims made by vendors and entities like NIST.

 
Posted : 26/01/2018 12:45 pm
jaclaz
(@jaclaz)
Posts: 5133
Illustrious Member
 

@TheFuf
a little bit off topic, but I believe - from what you reported elsewhere and from the contents of your article - that the issues with those Linux distros are related EXCLUSIVELY to booting Linux with the evidence drive connected; the same behaviour cannot be replicated if the evidence drive is hot-connected (via USB or SATA/eSATA).

We are saying the exact same things; however, there is no guarantee that a hardware write blocker is in any way "read only", or at least not "more read only" than a proper "soft blocked OS".

The perplexing issues remain:
1) BOTH public institutions (NIST in this case) and private parties (Paladin or - as we discussed elsewhere - Passmark Forensics) can make mistakes and overlook possible issues [1]
2) there are no reasons to believe that hardware write blocker manufacturers/vendors are in any way exempt from these same (or other) mistakes [1]
3) there is no active interest from the digital forensics community (at large/generically speaking) in fixing the situation/contributing to a solution

All in all, it seems to me like the hardware write blocker manufacturers are held (well paid for that role, BTW) as scapegoats in case something goes wrong, and the whole thing is widely deemed to be an S.E.P.

https://en.wikipedia.org/wiki/Somebody_else%27s_problem

The same ol' "the Devil made me do it, officer" excuse wink .

jaclaz

[1] a bug is possible in *anything* complex enough

 
Posted : 26/01/2018 2:15 pm
(@merriora)
Posts: 44
Eminent Member
Topic starter
 

There was a malware investigation with two suspect drives. One drive had important evidence located in the $LogFile (NTFS), while another one had this file wiped (entirely). This kind of data modification was observed (under specific conditions) when booting many live forensic distributions based on Ubuntu, including those validated by NIST (with no such issues recorded in the reports).

Should we tolerate this behavior? No, because this is a real issue with an obviously significant impact. Do vendors document such issues? In general, no. Do vendors fix such issues? Sometimes (also, there was a case when a vendor silently introduced a fix, but later removed it, thus bringing the issues back, see my paper here).

This is an area where I would like to see further standards both with software vendors and labs.

When discussing standards and especially ISO 17025, I've heard the concern about having to test every new software release which would obviously be very time intensive and difficult to complete.

For vendors, I would like to see them being upfront and very clear about any bugs or issues they discover within their software. Information and test data should be provided openly so that labs have a clear understanding of how and when the 'bug' affects data.

It would then be up to the lab to ensure no previously released Digital Forensics Reports were affected by that bug. If the bug did affect the data, then a supplementary report should be written to clearly identify how the data was affected. Since validation and testing of data should already be completed on every examination, most bugs will have caused insignificant issues, if any at all.

I think the only way to ensure that the above takes place is for Lab Standards to be in place that require these steps. Individuals can't be responsible, as they will often come and go from a lab in the years it takes many files to get to court.

The reality is that 'bugs' will always be a part of software, including forensic software. If we are proactive in re-analyzing the reports that may have been affected when a bug is discovered, this will show that the tools we use are effective and can be trusted, as data is validated and procedures are in place to catch potential issues.
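
As a rough illustration of what that proactive check could look like, here is a minimal sketch in Python; the record layout, tool name and version numbers are entirely hypothetical, just to show the idea of cross-referencing issued reports against a vendor's bug disclosure.

```
# Minimal sketch: flag previously issued reports produced with an affected
# tool build. All names and versions below are hypothetical.
from dataclasses import dataclass

@dataclass
class Report:
    case_id: str
    tool: str
    tool_version: str

def affected_reports(reports, tool, bad_versions):
    return [r for r in reports if r.tool == tool and r.tool_version in bad_versions]

issued = [
    Report("2016-041", "AcmeImager", "7.1.0"),
    Report("2017-112", "AcmeImager", "7.2.3"),
]

# Vendor discloses that builds 7.2.0-7.2.3 mishandle certain data (hypothetical).
for r in affected_reports(issued, "AcmeImager", {"7.2.0", "7.2.1", "7.2.2", "7.2.3"}):
    print(f"Re-examine case {r.case_id}: a supplementary report may be required")
```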

 
Posted : 26/01/2018 3:09 pm
bshavers
(@bshavers)
Posts: 210
Estimable Member
 

I neglected the acquisition aspect since it is impossible to require all (or even some) acquisitions to occur in an ISO-certified lab environment. Many acquisitions are conducted onsite by virtue of the systems involved or the limited time allowed to acquire. If there is ever a requirement to have lab-only acquisitions, you can imagine the negative impact that will have on forensics.

This aspect could be neglected if forensic labs weren't doing data acquisitions (within a lab) at all and if the integrity of digital evidence weren't at issue. Also, there could be a better solution than ISO 17025 for the acquisition phase, one that covers on-site data acquisitions as well.

Acquisitions conducted outside the lab have too many unanticipated factors to cover with a blanket standard. In both civil and criminal cases, there are restrictions as to (1) what you can acquire, (2) how much you can acquire, (3) when you can acquire, and (4) what you can take with you. Many cases today have the analysis done onsite at the victim's business. Being impractical to acquire terabytes of data, or being allowed to look at email only within the four walls of a business, prevents a lab from being involved in both acquisition and analysis.

As onsite response generally requires on-the-spot decision making and innovative, untested responses to handle new threats, violating a lab policy will be the norm. In these incidents, the "lab" is the victim's business, which certainly will not meet any standard as a certified lab.

 
Posted : 26/01/2018 9:04 pm
(@thefuf)
Posts: 262
Reputable Member
 

Acquisitions conducted outside the lab have too many unanticipated factors to cover with a blanket standard.

I disagree, because at least some basic rules with exceptions can be defined. If a forensic examiner wants to boot a suspect system directly and he/she has the right/permission to do so, that's okay. But it's not okay when a hardware write blocker is writing to a suspect drive (and it doesn't matter whether this happens in a lab or on site).

The rules I'm talking about don't necessarily restrict a forensic examiner; these rules may also target vendors of forensic tools (even to a higher degree).

In both civil and criminal cases, there are restrictions as to (1) what you can acquire, (2) how much you can acquire, (3) when you can acquire, and (4) what you can take with you. Many cases today have the analysis done onsite at the victim's business. Being impractical to acquire terabytes of data, or being allowed to look at email only within the four walls of a business, prevents a lab from being involved in both acquisition and analysis.

As onsite response generally requires on-the-spot decision making and innovative, untested responses to handle new threats, violating a lab policy will be the norm. In these incidents, the "lab" is the victim's business, which certainly will not meet any standard as a certified lab.

Since forensic examiners use tools (hardware and software ones) to access and interpret data, we should distinguish actions taken by a person using a tool from actions performed by a tool without the person's knowledge. If a forensic examiner wants to do something that will violate the integrity of digital evidence (and this doesn't necessarily mean that the evidence becomes inadmissible), then he/she is responsible for that action and its consequences (legal and technical). But what if a tool did something that violated the integrity of digital evidence and the forensic examiner wasn't aware of it? Typically, we would blame the examiner. But given that some vendors attempt to hide issues with the forensic soundness of their tools (for example, by deploying fixes silently), and that there is no way for an average lab to validate a complex data acquisition tool or a write blocker ("write known data - do something - compare", "hash - do something - hash again", and similar black-box approaches don't work well), finding the answer to this question isn't as easy as it seems. And this is why, in my opinion, there should be a standard.
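
For clarity, here is roughly what the "hash - do something - hash again" approach looks like in practice - a minimal sketch, assuming a Linux workstation with root privileges and /dev/sdc as a test drive behind the blocker (the device path and the dd step are examples, not a recommendation). The comments show why a matching pair of hashes proves much less than people tend to assume.

```
# Minimal sketch of the black-box "hash - do something - hash again" check.
# Assumes Linux, root privileges, /dev/sdc as the (test) drive behind the
# write blocker, and enough space under /tmp for the image.
import hashlib
import subprocess

def device_hash(path):
    h = hashlib.sha256()
    with open(path, "rb") as dev:
        for chunk in iter(lambda: dev.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

before = device_hash("/dev/sdc")
# "Do something": run the acquisition step under test against the blocked drive.
subprocess.run(["dd", "if=/dev/sdc", "of=/tmp/image.dd", "bs=1M"], check=True)
after = device_hash("/dev/sdc")

# A matching pair only shows that nothing was written under these exact
# conditions; it says nothing about behaviour triggered only by a particular
# volume layout (LVM, an out-of-sync software RAID member, hibernation data, etc.).
print("no change observed" if before == after else "drive was modified")
```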

 
Posted : 26/01/2018 11:32 pm
(@thefuf)
Posts: 262
Reputable Member
 

a little bit off topic, but I believe - from what you reported elsewhere and from the contents of your article - that the issues with those Linux distros are related EXCLUSIVELY to booting Linux with the evidence drive connected; the same behaviour cannot be replicated if the evidence drive is hot-connected (via USB or SATA/eSATA).

The activation of LVM volumes can be performed by a udev rule when a suspect drive is connected to a system after boot. This is just an example (not related to the $LogFile issue), because you used the word "EXCLUSIVELY". ;-)
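
A quick way to see this on your own workstation (a minimal sketch, Linux only, and please use a test drive rather than evidence): watch whether new device-mapper nodes appear when the drive is hot-connected, which would indicate that udev activated the LVM volumes without the examiner asking for it.

```
# Minimal sketch: detect udev auto-activation of LVM volumes on hot-plug by
# watching /dev/mapper. Use a test drive, not evidence.
import os
import time

def mapper_nodes():
    return set(os.listdir("/dev/mapper"))

before = mapper_nodes()
input("Hot-connect the test drive now, then press Enter... ")
time.sleep(5)  # give udev time to settle
new_nodes = mapper_nodes() - before
if new_nodes:
    print("udev auto-activated LVM volumes:", ", ".join(sorted(new_nodes)))
    print("-> the OS acted on the drive's LVM metadata without being asked")
else:
    print("no new device-mapper nodes observed")
```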

 
Posted : 26/01/2018 11:40 pm
(@thefuf)
Posts: 262
Reputable Member
 

When discussing standards and especially ISO 17025, I've heard the concern about having to test every new software release which would obviously be very time intensive and difficult to complete.

Side note: one vendor didn't bump the version number after patching a reported issue with forensic soundness.

It would then be up to the lab to ensure no previously released Digital Forensics Reports were affected by that bug. If the bug did affect the data, then a supplementary report should be written to clearly identify how the data was affected. Since validation and testing of data should already be completed on every examination, most bugs will have caused insignificant issues, if any at all.

Not all modifications made to original data can be identified after the fact. For example, if an acquisition tool did sync a software RAID set, then there could be no way to tell whether or not the array was out of sync before a forensic examiner ran the tool. So, in some cases you don't know for sure whether the data was affected by an issue or not.
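
Which is also an argument for recording the state of the media before any tool touches it, where that is possible at all. A minimal sketch (Linux with md software RAID; the output file name is just an example):

```
# Minimal sketch: capture the md software-RAID state before any acquisition
# tool runs, since an unwanted resync cannot be proven or disproven afterwards.
import datetime

stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
out_path = f"mdstat-before-acquisition-{stamp}.txt"
with open("/proc/mdstat") as src, open(out_path, "w") as dst:
    dst.write(src.read())
print("Recorded md state to", out_path)
```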

 
Posted : 26/01/2018 11:58 pm
(@pbeardmore)
Posts: 289
Reputable Member
 

Sometimes I wonder if the industry itself is partly to blame. The language that we choose to use around the industry is broadly in line with previously existing forms of forensics. By aligning digital forensics so closely with other forms of forensics, we have been "lumped in" with other lab-based forensic practices whilst, in "the real world", we are very, very different and really require our own standard.

The debate concerning mandatory standards presupposes that there is a relevant and meaningful standard available to apply (with qualified people to oversee the process).

How many of us, when we go to work, put on the white coat and gloves and enter the "calibration laboratory"? Not me - I work in an office, and I think most of us do. I know this sounds simplistic and I know it's open to debate, but it's also simplistic to assume that computer forensics has enough in common with traditional forensics to use the same set of standards.

The challenges (both short and long term) are just so so different. End of moan…for now.

PS: ever since using 17025 was first mooted by the regulator years ago, I don't think I have met one colleague within the industry who has said "yay, 17025, great idea". Almost the exact opposite. So do I trust the mass consensus within the industry (including many voices that I have huge respect for), or do I trust the regulator?

 
Posted : 27/01/2018 1:03 am
(@athulin)
Posts: 1156
Noble Member
 

When discussing standards and especially ISO 17025, I've heard the concern about having to test every new software release which would obviously be very time intensive and difficult to complete.

For the individual 'lab', absolutely. But as long as the test concerns something that is not lab-specific, it is technically acceptable to run the test once and share the results. A bit like what NIST does with their tests.

The absence of such activity suggests a field of forensic activity that isn't so much a field as a crowd of individuals.

Any CF interest organizations that are involved in doing or coordinating such tests for their members? I'd probably join such an organization without having to think too much about it …

The SWGDE is fairly close (https://www.swgde.org/). Their 'Framework of a Quality Management System for … Forensic Science Service Providers' is an interesting contrast to ISO 17025, as it explicitly targets digital forensics 'labs'.

 
Posted : 27/01/2018 7:50 am