ACPO Principles Revised  

tootypeg
(@tootypeg)
Active Member

I thought I would post this as it's something I've been working on, and the exact issue has popped up in another recent thread by GumStick. I have long thought that, despite ACPO not existing any more, we constantly refer back to the 4 governing principles of digital evidence, which I don't think suffer from being outdated. This has also been echoed by the posters in that thread.

I have been putting together a piece of work on this for a while now, but thought I would share my proposed set of eight new principles for digital investigation, which I think could now be more appropriate than the traditional 4.

I would be super interested in thoughts on this and hopefully it is also helpful in the other thread.


Principle 1- “Any course of investigatory action undertaken by the practitioner must first be agreed upon by an appropriate authority, who themselves must have full knowledge and insight into any agreed course of action in order to adopt responsibility for subsequent investigatory decision making.”

Principle 2- “A practitioner must understand those laws, policies and principles applicable to their given inquiry, which define the scope of their investigatory powers. Practitioners must evidence adherence to these, and operate within their confines at all times”.

Principle 3- “A practitioner should make all reasonable efforts to identify sources of potential evidence relevant to their investigation, taking into account the concepts of proportionality and necessity in regards to any device seized/interrogated. All justifiable measures must be taken to limit both collateral intrusion and any disruption caused by their investigation.”

Principle 4- “A practitioner should only access data on a digital device using a suitable method. Suitability is determined by the following:
A known and accepted method, which has been subject to peer and field-wide review.
A developed novel method, providing suitable testing and validation has been undertaken in order to verify its functionality.
In either case, the practitioner must understand any methods used and be able to explain their function.”

Principle 5- “A practitioner should take all reasonable steps to preserve the integrity of any device(s) subject to investigation during the course of their examination.”

Principle 6- “Methods of access which compromise the initial state of digital data on a device must be utilised as a last resort. Where such methods are implemented, the implications of their use must be both understood and capable of explanation by the practitioner.”

Principle 7- “All extracted and interpreted data deemed to be ‘digital evidence’ must have undergone robust testing and validation using accepted testing methods and peer review in order to verify accuracy.”

Principle 8- “All stages of a practitioner's investigation must be documented, forming an audit trail which can be used to describe those processes implemented by the practitioner to a third party, and where necessary and possible, allowing these procedures to be repeated in order to obtain comparable results.”
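To make Principle 8 a little more concrete, here is a minimal sketch of what a step-by-step audit trail could look like in practice. This is purely illustrative - the function and field names are my own invention, not part of the proposal:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_step(log_path, action, detail, data=None):
    """Append one examination step to a JSON-lines audit trail.

    Each entry records when the step happened and what was done; if the
    raw data handled at that step is supplied, its SHA-256 is recorded so
    a third party can later re-verify (and, where possible, repeat) it.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "detail": detail,
    }
    if data is not None:
        entry["sha256"] = hashlib.sha256(data).hexdigest()
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```

An examiner would call this at every stage (seizure, acquisition, analysis), producing a plain-text file that can be handed over alongside the report.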

Posted : 05/11/2019 8:38 pm
thefuf
(@thefuf)
Active Member

Any course of investigatory action undertaken by the practitioner must first be agreed upon by an appropriate authority

If there is one.

A practitioner should only access data on a digital device using a suitable method

Not only on a digital device, but also in a system consisting of two or more digital devices (acting as a whole).

A known and accepted method, which has been subject to peer and field-wide review.
A developed novel method providing suitable testing and validation has been undertaken in order to verify its functionality.
In either case, the practitioner must understand any methods used and be able to explain their function.

There are acquisition methods that
- are not properly documented, [and/or]
- are often misunderstood (or at least not well-understood) by practitioners who utilize these methods.

Some related threads as examples:
https://www.forensicfocus.com/Forums/viewtopic/t=15155/
https://www.forensicfocus.com/Forums/viewtopic/t=15061/

TL;DR
Why do we need to insert a microSD card into a Samsung device during the bootloader acquisition with UFED? No, it's not used as a buffer. No, it's not used as a destination storage device. No, the phone doesn't boot from that card. No, Cellebrite doesn't provide an accurate explanation. But if you have some background in binary exploitation and you are not afraid of EULA violations, you can try to reverse engineer the process and get the answer.

So
A developed novel method providing suitable testing and/or validation has been undertaken in order to verify its functionality.

to preserve the integrity of any device(s) subject to investigation

And don't forget about the integrity of digital data stored on these devices.

Methods of access which compromise the initial state of digital data on a device must be utilised as a last resort. Where such methods are implemented, the implications of their use must be both understood and capable of explanation by the practitioner.

And reasonable actions should be taken to preserve and/or document digital data which may become inaccessible in the future, even when this data is not directly affected by methods in question.

Examples (mobile device forensics when there is no way to acquire a full image): expired entries in web browsing history; applications that refuse to run because an update is required after a specific date.

All extracted and interpreted data deemed to be ‘digital evidence’ must have undergone robust testing and validation using accepted testing methods and peer review in order to verify accuracy.

This is impossible. A simple question: is it possible to properly validate a hard disk drive acquisition process/tool? In most cases, my answer is "no", because labs usually don't engage in reverse engineering.

Posted : 05/11/2019 11:44 pm
Rich2005
(@rich2005)
Senior Member

I'm going to be critical here but please take it as constructive criticism as I'm not trying to shoot you down.

I think the beauty of the 4 ACPO principles was their brevity. In number, length, and lack of prescriptiveness (may have just invented a word there).

There's a danger you're veering off into the same problem as ISO17025 by trying to pigeonhole everything unnecessarily.

I think the problem with point one is it's too definitive and prescriptive. The officer in charge or authority may often not "have full knowledge and insight" and be very non-technical. I'm also not sure "any" course of action should necessarily be agreed as, to the letter of that, you'd be forever going back to the OIC (or similar) for the tiniest thing you're doing.

The problem with point two is it's further setting up DF practitioners for a fall if something has changed without their knowledge. It also technically doesn't limit the implication to DF related laws and is saying they should understand all laws in their case. That's not necessary and someone wouldn't need to be an expert in tax law to produce reliable evidence for a financial investigation.

Principle three (the start) might not apply to many situations and be outside of the remit of an examiner. They might be dealing with one device as part of a larger investigation or be part of a larger team and the identification phase/responsibility falls to someone else more senior.

You'll be unsurprised to know I don't like principle 4 or principle 7. It's very ISO17025. I have big problems with the "lots of us use it so it's fine" logic or the "I've performed some limited testing so it's fine" logic. I think ISO17025 is a dangerous thing in DF for this reason, and it's the illusion of reliability, where it demonstrably doesn't exist, and the skill of the examiner will be far more important in how they identify and report their findings, check their findings where possible, spot problems or potential problems, give caveats where appropriate, etc.

Principle 8, being basically the same as ACPO principle 3, my thoughts are just the same as posted on the other thread (which sparked your post).

Posted : 06/11/2019 8:08 am
jaclaz
(@jaclaz)
Community Legend

I thought I would post this as it's something I've been working on, and the exact issue has popped up in another recent thread by GumStick. I have long thought that, despite ACPO not existing any more, we constantly refer back to the 4 governing principles of digital evidence, which I don't think suffer from being outdated. This has also been echoed by the posters in that thread.

Only for the record, the thread is this one, started by member GumStickStorage
https://www.forensicfocus.com/Forums/viewtopic/t=18147/

And the posters that commented on the ACPO principles - including yourself (and, for the very little it counts, myself) - have actually been critical of them (hence the *need* to have them revisited).

More specifically the original 4 ACPO principles are (IMHO) still very good in theory but fail in the practical application.

Your new set of 8 principles (while also being good in theory) seems to me to fall into exactly the same issues in practice - let's call them NG? (Next Generation).

Principle NG#1 -> Nice, please make a list of appropriate authorities you personally know that actually understand anything about digital forensics methods.

Principle NG#2 -> Not very different from ACPO principle #2

Principle NG#3 -> Define "reasonable" and "justifiable", and while you are at it, define also "proportionality" and "necessity". Though I am not really-really qualified for these comments 😯 , these seem to me like a good way to invite defense attorneys to a party.

Principle NG#4 -> Nice, please make a list of the major or most common forensics software packages currently in use that have detailed the methods they use, and where these methods have been peer reviewed.

Principle NG#5 -> Sure, that is ACPO #1, but - again - there is the definition of "reasonable" missing.

Principle NG#6 -> This is the corollary to NG#5 above but, as said on the other thread, the practitioner should have understood and be capable of explaining the methods used anyway.

Principle NG#7 -> It seems like essentially a repetition of (or a corollary to) NG#4 above?

Principle NG#8 -> Fine, this is ACPO #3, with a twist: who/why/when is it determined that repeating is "necessary"?

In this set, the concept of ACPO principle #4 (which is IMHO important) seems to be missing?

jaclaz

Posted : 06/11/2019 8:48 am
tootypeg
(@tootypeg)
Active Member

Super interesting replies to this - all comments welcome! I think it's good to be able to pick holes in this and actually see any/all issues, so everything is welcome!

Thefuf-

I see your point about an appropriate authority. I guess this is a catch-all phrase that covers the person(s) who sanction an investigation. Surely in most cases, there will be such a person/body?

In regards to accessing digital data on a device - I guess the phraseology should encompass non-locally stored data, which I would still argue is on a device and therefore the terminology technically would stand? Maybe clutching at straws.

The acquisition method point is valid and overlaps with Rich2005's. I guess we should understand everything that we do, but we often accept that because it's done by others it's OK. Arguably it's an unacceptable stance, but how do we solve this problem? We can't weaken a principle to suit the field just because we currently don't do something. So surely the principle has to be that we must understand the process - where in reality how we achieve this is the problem to be addressed. But I see your point, good example.

This is impossible. A simple question is it possible to properly validate a hard disk drive acquisition process/tool? In most cases, my answer is "no". Because labs usually don't engage in reverse engineering.

Possibly, but should this mean that we shouldn't have it as a principle? I mean, if we could do it effectively and efficiently we surely would do this validation - therefore I could argue that it should be a principle, and the burden of achieving it is something we have to address?

I just want to say, hearing what I am typing, that I am not trying to cause us more issues, just trying to play devil's advocate (correct phrase?!)

Rich2005-

Do you think the original 4 principles are now too generic/lacking in content, to the point that they no longer offer anything more than an anecdote? I'm not saying they aren't applicable, but I'm thinking that things have shifted, with privacy, quality and procedural issues now more strongly in play?

I think the problem with point one is it's too definitive and prescriptive. The officer in charge or authority may often not "have full knowledge and insight" and be very non-technical. I'm also not sure "any" course of action should necessarily be agreed as, to the letter of that, you'd be forever going back to the OIC (or similar) for the tiniest thing you're doing.

I hear your point here, maybe it needs moderating as a principle. I guess the essence of this is that before we make a decision, we should have permission and the permission granter should understand what they are granting permission for? In reality, it could be tough to implement, but is it also not sensible in some respects?

The problem with point two is it's further setting up DF practitioners for a fall if something has changed without their knowledge. It also technically doesn't limit the implication to DF related laws and is saying they should understand all laws in their case. That's not necessary and someone wouldn't need to be an expert in tax law to produce reliable evidence for a financial investigation.

Good point - I guess again it's a matter of language moderation. Maybe understanding of the investigatory laws that govern their actions, rather than the laws of the suspected offence under investigation?

You'll be unsurprised to know I don't like principle 4 or principle 7. It's very ISO17025. I have big problems with the "lots of us use it so it's fine" logic or the "I've performed some limited testing so it's fine" logic. I think ISO17025 is a dangerous thing in DF for this reason, and it's the illusion of reliability, where it demonstrably doesn't exist, and the skill of the examiner will be far more important in how they identify and report their findings, check their findings where possible, spot problems or potential problems, give caveats where appropriate, etc.

lol, I do agree. I think reliance on external testing and validation is risky - I don't like it. But I was curious to see what the reception of this stance might be. A principle that suggests the practitioner should self-test/validate may be more appropriate, and is the one I prefer. In reality, as noted to Thefuf, it might not be practical - but should this stop it from being a principle?

Posted : 06/11/2019 9:10 am
Rich2005
(@rich2005)
Senior Member

Do you think the original 4 principles are now too generic/lacking in content, to the point that they no longer offer anything more than an anecdote? I'm not saying they aren't applicable, but I'm thinking that things have shifted, with privacy, quality and procedural issues now more strongly in play?

I think they're suitably generic (albeit you could advocate minor tweaks) and there's a far bigger issue that needs addressing in terms of forensic regulation and ISO which is a mess being forced upon us.

I hear your point here, maybe it needs moderating as a principle. I guess the essence of this is that before we make a decision, we should have permission and the permission granter should understand what they are granting permission for? In reality, it could be tough to implement, but is it also not sensible in some respects?

What you're trying to get at isn't wrong. However it's simply the real-world practicalities that need to be considered. I think the ACPO principle point about the OIC being in charge is essentially getting at the same thing.

Good point - I guess again it's a matter of language moderation. Maybe understanding of the investigatory laws that govern their actions, rather than the laws of the suspected offence under investigation?

Yes, although I'd argue this is a slightly dangerous thing to be including, as I'd (on a separate topic) like to hear people's views on the collection of cloud data when on a warrant, for example. I've seen it done and I also know many people won't because they don't believe it's covered under the scope of the warrant (ie stored elsewhere). I think this is one of the many areas of DF that could be improved by having a central body tasked with improving the state of the field (whether testing tools, providing guidance on topics such as this, providing guidance on mobile phone seizure perhaps, etc). I think the biggest problem in DF is problems and rules being created for examiners but not enough focus on actually improving the quality of evidence and its collection (because ISO17025 most certainly doesn't do that - if anything the opposite).

lol, I do agree. I think reliance on external testing and validation is risky - I don't like it. But I was curious to see what the reception of this stance might be. A principle that suggests the practitioner should self-test/validate may be more appropriate, and is the one I prefer. In reality, as noted to Thefuf, it might not be practical - but should this stop it from being a principle?

My preference (as above and in other posts) is a central body that works on creating guidance, on laws, procedures, and helps with tool vetting. Rather than extra principles, or rules, that shift the focus (blame) onto the examiners, but don't solve the major problems or areas that could be improved within DF.

DF is such a massive field now, and probably more crucial than any other type of evidence, that it warrants serious funding. The single biggest risk in the field is, in my view, not rogue examiners or incompetence: it's lack of funding and the resulting lack of time spent on cases, whether inside law enforcement or not (on top of the lack of guidance, central tool testing, etc). You can add lack of training for all types of law enforcement officers (and barristers) to this too. There have been tiny steps to address this in both cases, but it's generally been p*ssing in the wind, to put it bluntly. However, this has been the case for a long time, and it's a governmental-level issue that frankly they're just not aware enough of (and even if they were, they probably wouldn't treat it seriously enough to address).

Posted : 06/11/2019 9:42 am
jaclaz
(@jaclaz)
Community Legend

My preference (as above and in other posts) is a central body that works on creating guidance, on laws, procedures, and helps with tool vetting.

Which would in theory create the need for a sort of NIST (and its CFTT):
https://www.nist.gov/itl/ssd/software-quality-group/computer-forensics-tool-testing-program-cftt
and its
https://www.dhs.gov/science-and-technology/nist-cftt-reports
and
https://toolcatalog.nist.gov/

which - understandably - even when a test/report exists (and it is exhaustive), is often related to a previous version of the software.

Moreover, again understandably, there are quite a few tests for "disk imaging" (which is - or should be - something that *any* practitioner can validate himself/herself):
https://www.dhs.gov/publication/st-disk-imaging
and only a few (actually 2: FTK and Encase) for "Windows Registry Forensic Tool":
https://www.dhs.gov/publication/st-windows-registry-forensic-tool

Assuming that these latter are the "main" or "big enough" players in the field (and, all in all, Windows Registry parsing should IMHO be among the "easy" ones among the many needs of an investigation, even if the tests are about fairly recent versions), they are not "perfect":
FTK

  • The tool incorrectly reported a QWORD value.
  • The tool did not process hive files generated by hivex library.
  • The tool did not report several big-data values in a v1.5 hive file.

Encase

  • The tool was terminated without any notification when it processed a tree structure with a large number of levels (about 1 million) in an experimental hive file.
  • Long value names (16,383 bytes and more) were not reported.
  • The tool did not report UTF-16LE characters properly.
  • The tool did not identify unusual ASCII characters (between 0x04 and 0x0D) of key and value names.
  • The ‘Tree’ and ‘Table’ panes of the tool operated differently when showing ASCII and UTF-16LE characters
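For what it's worth, the NIST report does not say *how* the QWORD was misreported, but a classic failure mode is truncation: a REG_QWORD is stored as 8 little-endian bytes, and a parser that reads only the first 4 (a DWORD) gets a completely different number. A hypothetical sketch:

```python
import struct

# A REG_QWORD is stored as 8 bytes, little-endian.
raw = struct.pack("<Q", 1_500_000_000_000)

correct = struct.unpack("<Q", raw)[0]        # proper 64-bit read
truncated = struct.unpack("<I", raw[:4])[0]  # buggy 32-bit (DWORD) read

# The truncated read keeps only the low 32 bits, so the two values differ.
```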

So, again, it is likely that in practice a similar approach is simply not doable or pretty much unuseful/incomplete.
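On the disk imaging point - that it is something *any* practitioner can validate himself/herself - the core check is simple enough to sketch (illustrative code, not any tool's actual implementation):

```python
import hashlib

def sha256_file(path, chunk_size=1 << 20):
    """Hash a file (or raw device node) in chunks to bound memory use."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            digest.update(block)
    return digest.hexdigest()

def verify_image(source_path, image_path):
    """A bit-for-bit image must hash identically to its source."""
    return sha256_file(source_path) == sha256_file(image_path)
```

In practice one would also hash the source before and after acquisition, to show that the process itself did not alter it.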

Rather than extra principles, or rules, that shift the focus (blame) onto the examiners, but don't solve the major problems or areas that could be improved within DF.

Yep, and both the original ACPO principles and the NG revised version, if you look at them from some distance, can be condensed into a single meta-principle 😯

Do the right thing.

😉

jaclaz

Posted : 06/11/2019 10:46 am
Rich2005
(@rich2005)
Senior Member

So, again, it is likely that in practice a similar approach is simply not doable or pretty much unuseful/incomplete.

I don't believe it's not do-able at all or unuseful.

I think you're missing the point as the intention isn't to get a tool that's validated as being perfect. That cannot possibly happen.

The point was that, instead of hundreds of labs running extremely limited tests, likely designed not to fail, with the sole aim of passing ISO17025, wasting a huge amount of collective time and money, some co-ordinated testing of major tools, as they're released, would be far more efficient. You could get the central body to run the kind of limited test that everyone would be doing to achieve ISO17025 and report the results for everyone rather than duplicating the work 100 fold (or more). On top of that have other staff who're performing more wide ranging testing, on current versions of software, and reporting monthly on their findings in a short report (as well as a note immediately on a website). With a remit not to provide stale reports on imaging but to provide public bug reporting of issues with current tools (perhaps even allow examiners to post their bugs too for the benefit of all - regardless of whether they've reported to the manufacturer - so others can be aware of a potential unverified problem).

Separately, have time and money allocated for some experienced forensic examiners and lawyers/barristers to periodically review current legal issues facing digital evidence (let's use the cloud collection example) and provide guidance on a six monthly basis (or urgently if something changes that everyone should know about). Perhaps even allow submissions (questions) from examiners about topics they're unsure of and the best ones could be addressed.

I don't know how well funded that NIST department is but I imagine they're not massively resourced and I think these kind of things should have significant amounts of money spent on them (considering how important digital evidence is in the majority of cases now……if the work has even been done at all and not screened out). A few million quid is absolutely nothing in the grand scheme of things (and frankly the time wasted by every company and lab performing ISO17025 will absolutely dwarf that and frankly have virtually no benefit - or arguably decrease the quality of work produced).

Posted : 06/11/2019 11:22 am
thefuf
(@thefuf)
Active Member

Surely in most cases, there will be such a person/body?

In most cases, yes.

But what happens if an "investigatory action" is performed by a person who is legally capable of making that decision on his/her own?

I guess phraseology should encompass non-local stored data - which I would still argue is on a device and therefore the terminology technically would stand?…maybe clutching at straws.

Well, sometimes it's almost impossible to find out which device currently stores the data in question. So there should be another level of abstraction: a system (a group of servers, "a cloud") instead of a device.

I guess we should understand everything that we do, but we often accept that because it's done by others it's OK. Arguably it's an unacceptable stance, but how do we solve this problem? We can't weaken a principle to suit the field just because we currently don't do something. So surely the principle has to be that we must understand the process - where in reality how we achieve this is the problem to be addressed.

We can weaken the principle. It's "best/good practices", not "best/good theories".

Possibly, but should this mean that we shouldn't have it as a principle? I mean, if we could do it effectively and efficiently we surely would do this validation - therefore I could argue that it should be a principle, and the burden of achieving it is something we have to address?

That depends on your view of "effectively and efficiently". If we can do it "effectively and efficiently", then why not? But are we actually doing this "effectively and efficiently"?

The current principles do not mention any sort of validation. There is an opinion that it's not the tool, but skills and knowledge, that should be tested. I disagree. If a tool occasionally spoils data on an original hard disk drive, then no skills/knowledge will help in getting overwritten data back. Thus, at least basic things like data acquisition methods & tools should be validated at some level. Personally, I distinguish between "data acquisition" (tools have great impact; often there is no second chance), "data interpretation" (tools have great impact, but you can always run another tool against the same data, or open a hex editor and do some manual work when parsing artifacts), and "data presentation".
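As a small example of the manual hex-editor work mentioned above: many Windows artifacts store timestamps as FILETIME values (a 64-bit little-endian count of 100-nanosecond intervals since 1601-01-01 UTC), which can be decoded by hand rather than trusted to a tool. A sketch, with the encoder included only to build test data:

```python
import struct
from datetime import datetime, timedelta, timezone

EPOCH_1601 = datetime(1601, 1, 1, tzinfo=timezone.utc)

def decode_filetime(raw8):
    """Decode 8 raw bytes as a Windows FILETIME: a 64-bit little-endian
    count of 100-nanosecond intervals since 1601-01-01 UTC."""
    ticks = struct.unpack("<Q", raw8)[0]
    return EPOCH_1601 + timedelta(microseconds=ticks // 10)

def encode_filetime(dt):
    """Inverse helper for building test data (exact integer math to
    avoid float rounding over ~400 years' worth of ticks)."""
    td = dt - EPOCH_1601
    ticks = (td.days * 86400 + td.seconds) * 10_000_000 + td.microseconds * 10
    return struct.pack("<Q", ticks)
```

Cross-checking a tool's reported timestamp against a hand-decoded value like this is exactly the kind of spot check a practitioner can do without any reverse engineering.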

Posted : 06/11/2019 11:25 am
thefuf
(@thefuf)
Active Member

and only a few (actually 2 FTK and Encase) for "Windows Registry Forensic Tool"

The first test plan for this area was written in 2018. Just give it some time.

Posted : 06/11/2019 11:30 am
tootypeg
(@tootypeg)
Active Member

Curiously - and I'm not saying the proposed 8 principles are perfect - is it a case that, in a perfect world, these principles would be valid, and it's just that in practice they are extremely difficult to implement?

Should principles be compromised for real-world expectations, or should they define the perfect standard to aim to attain?

Posted : 06/11/2019 12:34 pm
jaclaz
(@jaclaz)
Community Legend

I don't believe it's not do-able at all or unuseful.

I think you're missing the point as the intention isn't to get a tool that's validated as being perfect. That cannot possibly happen.

Rest assured, I didn't miss that point.

The point was that, instead of hundreds of labs running extremely limited tests, likely designed not to fail, with the sole aim of passing ISO17025, wasting a huge amount of collective time and money, some co-ordinated testing of major tools, as they're released, would be far more efficient. You could get the central body to run the kind of limited test that everyone would be doing to achieve ISO17025 and report the results for everyone rather than duplicating the work 100 fold (or more). On top of that have other staff who're performing more wide ranging testing, on current versions of software, and reporting monthly on their findings in a short report (as well as a note immediately on a website). With a remit not to provide stale reports on imaging but to provide public bug reporting of issues with current tools (perhaps even allow examiners to post their bugs too for the benefit of all - regardless of whether they've reported to the manufacturer - so others can be aware of a potential unverified problem).

I perfectly understand that, but having the "limited testing everyone would be doing to achieve ISO 17025" done once by a centralized "official" body, as opposed to doing the same thing 100-fold (or more), is a very nice step towards better efficiency of the system; still, that would remain ONLY "limited testing to achieve ISO 17025" (very unlike the "proper" validation required by the ACPO principles, original or "NG").

So, as said IMHO, very nice ideas in theory but not enough in practice.

As well, a centralized bug report system might be a very nice thing, but it would hardly work unless there is also a connection with the manufacturer of the software; the "others" might well become (more) aware of issues reported by colleagues, but if there is no (prompt) remedy, bug numbers will increase to the point that the situation becomes unmanageable.

Only to give you an example of how a (disturbing, but IMHO not so serious/common) very specific bug has been managed inside the small community of Forensic Focus by two much-esteemed members, thefuf and Passmark:
https://www.forensicfocus.com/Forums/viewtopic/t=14057/postdays=0/postorder=asc/start=14/

Imagine the same, but instead with clueless people mis-reporting bugs and manufacturers uninterested in them, and multiply that by 1,000 or 10,000 or 100,000.

Besides there is a risk of "fake bugs" (not entirely unlike fake news).

Let's assume there is this central bug repository: who will prevent the manufacturer of "software A" from filing a huge number of bug reports on "software B" (by a competitor)?

You will need to validate each user reporting a bug …

Separately, have time and money allocated for some experienced forensic examiners and lawyers/barristers to periodically review current legal issues facing digital evidence (let's use the cloud collection example) and provide guidance on a six monthly basis (or urgently if something changes that everyone should know about). Perhaps even allow submissions (questions) from examiners about topics they're unsure of and the best ones could be addressed.

I don't know how well funded that NIST department is but I imagine they're not massively resourced and I think these kind of things should have significant amounts of money spent on them (considering how important digital evidence is in the majority of cases now……if the work has even been done at all and not screened out). A few million quid is absolutely nothing in the grand scheme of things (and frankly the time wasted by every company and lab performing ISO17025 will absolutely dwarf that and frankly have virtually no benefit - or arguably decrease the quality of work produced).

They are usually NOT very well funded (at least historically, JFYI)

https://www.forensicfocus.com/Forums/viewtopic/p=6569804/

The first test plan for this area was written in 2018. Just give it some time.

Sure, I guess that the overall plan is that by 2030 there will be an effective database of tests/reports that will solve the issue at hand.

jaclaz

Posted : 06/11/2019 3:08 pm
Rich2005
(@rich2005)
Senior Member

I perfectly understand that, but having the "limited testing everyone would be doing to achieve ISO 17025" done once by a centralized "official" body, as opposed to doing the same thing 100-fold (or more), is a very nice step towards better efficiency of the system; still, that would remain ONLY "limited testing to achieve ISO 17025" (very unlike the "proper" validation required by the ACPO principles, original or "NG").

So, as said, IMHO these are very nice ideas in theory but not enough in practice.

I'm not really sure what you're arguing for here. "Proper" validation is completely unrealistic in practical terms in DF, especially if you don't limit it to one relatively narrow process like imaging. At present you have token efforts (mostly to appear ISO compliant), but largely speaking nothing is really validated in any meaningful way (and it would be impossible to do so for every method used by every program applied to all the possible data sets). In practice many people will likely simply try to mitigate things using a combination of knowledge/experience and dual-tooling.

Likewise, a centralized bug report system might be a very nice thing, but it will hardly work unless there is also a connection with the manufacturer of the software. The "others" might well become (more) aware of issues reported by colleagues, but if there is no (prompt) remedy, bug numbers will increase to the point that the situation becomes unmanageable.

They could, of course, have tie-ins with manufacturers, but even if they didn't, it would still work. An examiner could easily have a quick flick through the weekly/monthly reports to be aware of problems with tools that they might not have otherwise known of. The same for any immediate bug listings produced by the unit (or other users). It would be something for people to simply keep in mind.

Just to give you an example of how a very specific (disturbing, but IMHO not so serious/common) bug was managed inside the small community of Forensic Focus by two much-esteemed members, thefuf and Passmark:
https://www.forensicfocus.com/Forums/viewtopic/t=14057/postdays=0/postorder=asc/start=14/

Having been a member here for longer than most 😉 I'm well aware people MIGHT discuss a specific problem; equally, I'm aware this barely scratches the surface of the endless issues, whether hardware or software. If anything, the example you cite gives weight to the purpose of having a centralised bug reporting system, so that people can be aware of issues (and even discuss them, if that functionality were added as an offshoot).

Now imagine the same, but with clueless people mis-reporting bugs and manufacturers uninterested in them, and multiply that by 1,000 or 10,000 or 100,000.

Besides there is a risk of "fake bugs" (not entirely unlike fake news).

Let's assume there is this central bug repository: who will prevent the manufacturer of "software A" from filing a huge number of bug reports against "software B" (a competitor's product)?

You will need to validate each user reporting a bug …

You could have the users vetted if you like, but I think you're worrying about a problem that doesn't exist and probably never would; if it did become a problem, you could easily solve it anyway. In reality people could simply check the list of potential problems and keep an eye out for them. If it turns out to be false then there's no problem.

They are usually NOT very well funded (at least historically, JFYI)

https://www.forensicfocus.com/Forums/viewtopic/p=6569804/

Hence it's no reason why a more useful version couldn't be created, with better funding and aims.

Posted : 06/11/2019 3:38 pm
jaclaz
(@jaclaz)
Community Legend

I'm not really sure what you're arguing for here.

It seems to me like you are getting exactly what I mean, and you actually expanded on it perfectly.

"Proper" validation is completely unrealistic in practical terms in DF, especially if you don't limit it to one relatively narrow process like imaging. At present you have token efforts (mostly to appear ISO compliant), but largely speaking nothing is really validated in any meaningful way (and it would be impossible to do so for every method used by every program applied to all the possible data sets). In practice many people will likely simply try to mitigate things using a combination of knowledge/experience and dual-tooling.

The most a hypothetical centralized office/service could do would be to offer some vague mitigation directives for the actually unresolvable problem, which is "proper" validation: required by the "classic" and "NG" principles and also by 17025, but which in your own words is not really done in any meaningful way (now) and is impossible for every method/tool/case (now or in the future).

In reality people could simply check the list of potential problems and keep an eye out for them. If it turns out to be false then there's no problem.

Sure, so (as an example) "people" find on the list that "software A" has been reported to be buggy in "case X".
What should "people" do?
1) trust the bug report and thus avoid using "software A" in a case similar to "case X" (potentially losing some relevant evidence)
2) not trust the bug report and use "software A" in a case similar to "case X" (with or without a reference to the possibility that the evidence found may not be valid)
3) validate "software A" in "case X" themselves (back to square #1)
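The three options above can be restated as a tiny decision helper, just to make the trade-off explicit. This is an entirely hypothetical restatement of jaclaz's list - the function, its parameters and the returned strings are invented for illustration:

```python
# Hypothetical restatement of the three options facing an examiner who
# finds "software A" flagged as buggy for situations like "case X".
def choose_course(trust_report: bool, willing_to_revalidate: bool) -> str:
    if willing_to_revalidate:
        # Option 3: do your own validation - which is back to square #1.
        return "validate software A against case X yourself"
    if trust_report:
        # Option 1: avoid the tool, potentially losing relevant evidence.
        return "avoid software A for cases similar to case X"
    # Option 2: use the tool anyway, caveating that results may be invalid.
    return "use software A, noting the reported bug"


print(choose_course(trust_report=True, willing_to_revalidate=False))
# → avoid software A for cases similar to case X
```

Whichever branch is taken, the cost jaclaz describes (lost evidence, caveated evidence, or repeated validation work) is still paid somewhere.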

Hence it's no reason why a more useful version couldn't be created, with better funding and aims.

Like (in the real world I mean) lack of money[1]?

jaclaz

[1] in the sense of being likely not a high priority issue for the government.

Posted : 06/11/2019 5:27 pm
dan0841
(@dan0841)
Member

I don't believe it's undoable at all, or without use.

I think you're missing the point as the intention isn't to get a tool that's validated as being perfect. That cannot possibly happen.

The point was that, instead of hundreds of labs running extremely limited tests, likely designed not to fail, with the sole aim of passing ISO17025, wasting a huge amount of collective time and money, some co-ordinated testing of major tools, as they're released, would be far more efficient. You could get the central body to run the kind of limited test that everyone would be doing to achieve ISO17025 and report the results for everyone, rather than duplicating the work 100-fold (or more).

You absolutely hit the nail on the head here. Some of the current limited testing is a token effort designed to pass 17025, and is farcical - even from accredited organisations.

It validates very little and is barely worth the paper it's written on: mass duplication, much of it devised to fudge a 'pass' of a tool/method.

Posted : 06/11/2019 8:04 pm