ACPO Principles Revised

66 Posts
10 Users
0 Likes
6,842 Views
(@tootypeg)
Posts: 173
Estimable Member
Topic starter
 

Curiously, and I'm not saying the proposed 8 principles are perfect etc., but is it the case that in a perfect world these principles would be valid, it's just that in practice they are extremely difficult to implement?

Should principles be compromised for real-world expectations, or should they define the perfect standard to aim to attain?

 
Posted : 06/11/2019 12:34 pm
jaclaz
(@jaclaz)
Posts: 5133
Illustrious Member
 

I don't believe it's not do-able at all or useless.

I think you're missing the point as the intention isn't to get a tool that's validated as being perfect. That cannot possibly happen.

Rest assured, I didn't miss that point.

The point was that, instead of hundreds of labs running extremely limited tests, likely designed not to fail, with the sole aim of passing ISO17025, wasting a huge amount of collective time and money, some co-ordinated testing of major tools, as they're released, would be far more efficient. You could get the central body to run the kind of limited test that everyone would be doing to achieve ISO17025 and report the results for everyone rather than duplicating the work 100 fold (or more). On top of that have other staff who're performing more wide ranging testing, on current versions of software, and reporting monthly on their findings in a short report (as well as a note immediately on a website). With a remit not to provide stale reports on imaging but to provide public bug reporting of issues with current tools (perhaps even allow examiners to post their bugs too for the benefit of all - regardless of whether they've reported to the manufacturer - so others can be aware of a potential unverified problem).

I understand that perfectly, but while having the "limited testing everyone would be doing to achieve ISO 17025" done once by a centralized "official" body, as opposed to doing the same thing 100-fold (or more), is a very nice step towards better efficiency of the system, it would still remain ONLY "limited testing to achieve ISO 17025" (very unlike the "proper" validation required by the ACPO principles, original or "NG").

So, as said IMHO, very nice ideas in theory but not enough in practice.

Likewise, a centralized bug report system might be a very nice thing, but it is hardly workable unless there is also a connection with the manufacturer of the software. The "others" might well become (more) aware of issues reported by colleagues, but if there is no (prompt) remedy the number of bugs will increase, to the point that the situation becomes unmanageable.

Just to give you an example of how a very specific (disturbing, but IMHO not so serious/common) bug was handled inside the small Forensic Focus community by two much-esteemed members, thefuf and Passmark:
https://www.forensicfocus.com/Forums/viewtopic/t=14057/postdays=0/postorder=asc/start=14/

Imagine the same, but instead with clueless people mis-reporting bugs and manufacturers uninterested in them, and multiply that by 1,000 or 10,000 or 100,000.

Besides there is a risk of "fake bugs" (not entirely unlike fake news).

Let's assume there is this central bug repository: who will prevent the manufacturer of "software A" from filing a huge number of bug reports against "software B" (by a competitor)?

You will need to validate each user reporting a bug …

Separately, have time and money allocated for some experienced forensic examiners and lawyers/barristers to periodically review current legal issues facing digital evidence (let's use the cloud collection example) and provide guidance on a six monthly basis (or urgently if something changes that everyone should know about). Perhaps even allow submissions (questions) from examiners about topics they're unsure of and the best ones could be addressed.

I don't know how well funded that NIST department is, but I imagine they're not massively resourced, and I think these kinds of things should have significant amounts of money spent on them (considering how important digital evidence is in the majority of cases now… if the work has even been done at all and not screened out). A few million quid is absolutely nothing in the grand scheme of things (and frankly the time wasted by every company and lab performing ISO17025 will absolutely dwarf that and have virtually no benefit - or arguably decrease the quality of work produced).

They are usually NOT very well funded (at least historically, JFYI)

https://www.forensicfocus.com/Forums/viewtopic/p=6569804/

The first test plan for this area was written in 2018. Just give it some time.

Sure, I guess that the overall plan is that by 2030 there will be an effective database of tests/reports that will solve the issue at hand.

jaclaz

 
Posted : 06/11/2019 3:08 pm
(@rich2005)
Posts: 535
Honorable Member
 

I understand that perfectly, but while having the "limited testing everyone would be doing to achieve ISO 17025" done once by a centralized "official" body, as opposed to doing the same thing 100-fold (or more), is a very nice step towards better efficiency of the system, it would still remain ONLY "limited testing to achieve ISO 17025" (very unlike the "proper" validation required by the ACPO principles, original or "NG").

So, as said IMHO, very nice ideas in theory but not enough in practice.

I'm not really sure what you're arguing for here. "Proper" validation is completely unrealistic in practical terms in DF, especially if it is not limited to one relatively narrow process like imaging. At present you have, at best, token efforts (mostly to appear ISO compliant), but largely speaking nothing is really validated in any meaningful way (and it would be impossible to do so for every method used by every program applied to all the possible data sets). In practice many people will likely simply try to mitigate things using a combination of knowledge/experience and dual-tooling.

Likewise, a centralized bug report system might be a very nice thing, but it is hardly workable unless there is also a connection with the manufacturer of the software. The "others" might well become (more) aware of issues reported by colleagues, but if there is no (prompt) remedy the number of bugs will increase, to the point that the situation becomes unmanageable.

They could, of course, have tie-ins with manufacturers, but even if they didn't it would still work. An examiner could easily have a quick flick through the weekly/monthly reports to be aware of problems with tools that they might not otherwise have known of. The same for any immediate bug listings produced by the unit (or other users). It would be something for people to simply keep in mind.

Just to give you an example of how a very specific (disturbing, but IMHO not so serious/common) bug was handled inside the small Forensic Focus community by two much-esteemed members, thefuf and Passmark:
https://www.forensicfocus.com/Forums/viewtopic/t=14057/postdays=0/postorder=asc/start=14/

Having been a member here for longer than most 😉 I'm well aware people MIGHT discuss a specific problem; equally, I'm aware this barely scratches the surface of the endless issues, whether hardware or software. If anything, the example you cite gives weight to the purpose of having a centralised bug reporting system, so that people can be aware of issues (and even discuss them, if that functionality were added as an offshoot).

Imagine the same, but instead with clueless people mis-reporting bugs and manufacturers uninterested in them, and multiply that by 1,000 or 10,000 or 100,000.

Besides there is a risk of "fake bugs" (not entirely unlike fake news).

Let's assume there is this central bug repository: who will prevent the manufacturer of "software A" from filing a huge number of bug reports against "software B" (by a competitor)?

You will need to validate each user reporting a bug …

You could have the users vetted if you like, but I think you're worrying about a problem that doesn't exist and probably never would. If it did become a problem, you could easily solve it anyway. In reality, people could simply check the list of potential problems and keep an eye out for them. If a report turns out to be false, then there's no problem.

They are usually NOT very well funded (at least historically, JFYI)

https://www.forensicfocus.com/Forums/viewtopic/p=6569804/

Hence it shouldn't be taken as a reason why a more useful version couldn't be created, with better funding and aims.

 
Posted : 06/11/2019 3:38 pm
jaclaz
(@jaclaz)
Posts: 5133
Illustrious Member
 

I'm not really sure what you're arguing for here.

It seems to me that you are getting exactly what I mean, and you have actually expanded on it perfectly:

"proper" validation is completely unrealistic in practical terms in DF especially if not limiting it to one relatively narrow process like imaging. At present you have either token efforts (mostly to appear ISO compliant) but largely speaking nothing is really validated in any meaningful way (and would be impossible to do for every method used by every program applied to the all the possible data sets). In practice many people will likely simply try to mitigate things using a combination of knowledge/experience and dual-tooling.

The most a hypothetical centralized office/service could do would be to offer some vague mitigation directives for the actually unresolvable problem, which is "proper" validation: required by the "classic" and "NG" principles and also by ISO 17025, but which, in your own words, is not really performed in any meaningful way (now) and is impossible for every method/tool/case (now or in the future).

In reality, people could simply check the list of potential problems and keep an eye out for them. If a report turns out to be false, then there's no problem.

Sure, so (as an example) "people" find on the list that "software A" has been reported to be buggy in "case X".
What should "people" do?
1) trust the bug report and thus avoid using "software A" in a case similar to "case X" (potentially losing some relevant evidence)
2) not trust the bug report and use "software A" in a case similar to "case X" (with or without a reference to the possibility that the evidence found may not be valid)
3) validate "software A" in "case X" themselves (back to square #1)

Hence it shouldn't be taken as a reason why a more useful version couldn't be created, with better funding and aims.

Like (in the real world I mean) lack of money[1]?

jaclaz

[1] In the sense that it is likely not a high-priority issue for the government.

 
Posted : 06/11/2019 5:27 pm
(@dan0841)
Posts: 91
Trusted Member
 

I don't believe it's not do-able at all or useless.

I think you're missing the point as the intention isn't to get a tool that's validated as being perfect. That cannot possibly happen.

The point was that, instead of hundreds of labs running extremely limited tests, likely designed not to fail, with the sole aim of passing ISO17025, wasting a huge amount of collective time and money, some co-ordinated testing of major tools, as they're released, would be far more efficient. You could get the central body to run the kind of limited test that everyone would be doing to achieve ISO17025 and report the results for everyone rather than duplicating the work 100 fold (or more).

You absolutely hit the nail on the head here. Some of the current limited testing is a token effort which is designed to pass 17025 and is farcical. Even from accredited organisations.

It validates very little and is barely worth the paper it's written on. Mass duplication, much of it devised to fudge a 'pass' for a tool/method.

 
Posted : 06/11/2019 8:04 pm
(@tootypeg)
Posts: 173
Estimable Member
Topic starter
 

In terms of testing, can someone give me an example of what should be done? I accept there are likely, as you all state, 'tests designed to pass'.

How, for example, do we test a mainstream tool (X-ways, FTK, Encase, etc.)? Is this what you mean?

 
Posted : 06/11/2019 8:53 pm
(@trewmte)
Posts: 1877
Noble Member
 

I don't believe it's not do-able at all or useless.

I think you're missing the point as the intention isn't to get a tool that's validated as being perfect. That cannot possibly happen.

The point was that, instead of hundreds of labs running extremely limited tests, likely designed not to fail, with the sole aim of passing ISO17025, wasting a huge amount of collective time and money, some co-ordinated testing of major tools, as they're released, would be far more efficient. You could get the central body to run the kind of limited test that everyone would be doing to achieve ISO17025 and report the results for everyone rather than duplicating the work 100 fold (or more).

You absolutely hit the nail on the head here. Some of the current limited testing is a token effort which is designed to pass 17025 and is farcical. Even from accredited organisations.

It validates very little and is barely worth the paper it's written on. Mass duplication, much of it devised to fudge a 'pass' for a tool/method.

Worth a (re)read:

An analysis of digital forensic examinations: Mobile devices versus hard disk drives utilising ACPO & NIST guidelines
Digital Investigation 8 (2011) 135-140 [doi:10.1016/j.diin.2011.03.002]

The guidelines requirements set out clear expectations of what a tool claims to do and what it can actually do after the vigorous testing phases.

Both NIST and ACPO guidelines need to be updated quite frequently as mobile devices are constantly evolving and their features becoming more ubiquitous. The forensic regulator for the UK, Mr Andrew Rennison, appointed in 2008, and his committee are being tasked with reviewing the principles of ACPO. It was stated by a senior police officer at CFET, 2009 that he is aware that the current ACPO principles require “modernising” to cope with the rapid changes in technology.

MTEB UK SEMINARS 2016 II v03- QA Lab Accreditation.pdf
https://www.dropbox.com/s/kun2gx64t5fzu5y/MTEB%20UK%20SEMINARS%202016%20II%20v03-%20QA%20Lab%20Accreditation.pdf

 
Posted : 07/11/2019 8:53 am
jaclaz
(@jaclaz)
Posts: 5133
Illustrious Member
 

In terms of testing, can someone give me an example of what should be done? I accept there are likely, as you all state, 'tests designed to pass'.

How, for example, do we test a mainstream tool (X-ways, FTK, Encase, etc.)? Is this what you mean?

Look no further than the (already linked to) NIST/DHS tests on Encase and FTK (limited to their Registry search/parsing functions, i.e. a very small subset of what they can do).

https://www.forensicfocus.com/Forums/viewtopic/p=6600956/#6600956

Look at the various (many) entries in the "test dataset" and at the (huge) number of features tested; it is a LOT of work to:
1) create all kinds of "common" and "uncommon" items in the test dataset
2) check all the features
3) compare results with the actual (known) contents of the data sample at hand

I believe that *any* laboratory will take a couple of machines/real cases, check a small subset of the features (i.e. the most common features on a very common test dataset, which will most probably pass fine [1]) and call it a day.
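To make step 3 concrete, here is a minimal sketch (in Python) of the kind of comparison involved: checking a tool's exported hits against the known contents of a test dataset. The file names, manifest layout and CSV column names are purely hypothetical, chosen for illustration, and do not correspond to any real tool's export format.

```python
# Minimal sketch: compare a tool's exported results against the known
# ("planted") contents of a test dataset. File names, manifest layout and
# CSV columns are hypothetical, not any real tool's format.
import csv
import json

# Ground truth recorded while building the test hive:
# a list of {"key": "...", "value": "..."} entries (hypothetical layout).
with open("registry_ground_truth.json", encoding="utf-8") as fh:
    expected = {(item["key"], item["value"]) for item in json.load(fh)}

# What the tool under test reported, exported as a CSV with
# "Key" and "Value" columns (again, hypothetical).
with open("tool_export.csv", newline="", encoding="utf-8") as fh:
    reported = {(row["Key"], row["Value"]) for row in csv.DictReader(fh)}

missed = expected - reported    # planted items the tool did not find
spurious = reported - expected  # reported items that were never planted

print(f"{len(expected)} planted, {len(reported)} reported, "
      f"{len(missed)} missed, {len(spurious)} spurious")
for key, value in sorted(missed):
    print(f"MISSED: {key} -> {value}")
```

The hard part, of course, is not this comparison but step 1 - building a dataset whose contents are exhaustively known - which is exactly why most labs stop at a couple of real cases and the most common features.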

N.B. if you check the "defects" found, none of them are actually particularly "serious", but - only as a single example for Encase - the

External Device
◦ Partial external device related data was reported. [Windows ALL]
- The tool identified all USB storage devices, but it did not report several device
related metadata such as ‘Last Connected Date’.

might (or might not) make a difference in a real case.

jaclaz

[1] It is not as if FTK and Encase are untested tools (they have been around for years and - speaking of the Windows Registry - its format has been basically the same since NT 3.1); of course they are tested and work just fine on - say - 99.99% of the registries you can find.

Imagine the corresponding issues in mobile forensics where each and every manufacturer and/or each and every phone model changes something continuously …

 
Posted : 07/11/2019 11:16 am
(@athulin)
Posts: 1156
Noble Member
 

In terms of testing, can someone give me an example of what should be done? I accept there are likely, as you all state, 'tests designed to pass'.

Some kind of basic functionality tests. File metadata, particularly that which influences forensic questions. Timestamps, ownership, access rights, and perhaps metadata that affect other things that are of particular forensic interest. Similar metadata from file archives, from backup files, restore points, what have you.

Does the tool extract them correctly? (EnCase managed to get exFAT timestamps confused pretty early, claiming that one timestamp was another, and vice versa - something I felt should have been caught if even minimal quality assurance had been present.) Are they interpreted correctly, or at least unambiguously? (You don't want to find that a tool reports a few hundred different timestamps (from a binary perspective) as the same timestamp when they are reported to the user.)

Basic interpretation of additional data: e.g. is ownership information (typically some kind of binary data) converted to correct, readable, unambiguous user information? What about access rights? What about correct handling of any version changes? (A bit like tools that know about /etc/passwd but can't cope with /etc/shadow. I don't know of any, but any change in OS implementation affecting these areas must also be treated correctly - adding an /etc/shadow file to an operating system that doesn't use it only confuses the forensic analyst, along with any tool he/she may rely on.)

The basic principle, I think, is to identify the information that must be handled correctly and reliably if it is to be of any forensic value. And then test that.
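As a concrete illustration of "identify the information that must be handled correctly, then test that", here is a minimal sketch (Python) that compares the timestamps a tool reports against a manifest of known-correct values recorded when the test image was built. The file names, manifest layout and export columns are assumptions made up for the example.

```python
# Minimal sketch: check reported file timestamps against known-correct values.
# File names, manifest layout and export columns are assumptions for
# illustration only.
import csv
import json
from datetime import datetime, timezone

# Ground truth recorded when the test image was built, e.g.
# {"/dir/file.txt": {"modified": "2019-11-06T12:34:56+00:00", ...}, ...}
with open("timestamp_ground_truth.json", encoding="utf-8") as fh:
    truth = json.load(fh)

mismatches = []
with open("tool_export.csv", newline="", encoding="utf-8") as fh:
    for row in csv.DictReader(fh):  # assumed columns: Path, Modified, Created, Accessed
        expected = truth.get(row["Path"])
        if expected is None:
            continue
        for field in ("modified", "created", "accessed"):
            want = datetime.fromisoformat(expected[field]).astimezone(timezone.utc)
            got = datetime.fromisoformat(row[field.capitalize()]).astimezone(timezone.utc)
            if want != got:
                mismatches.append((row["Path"], field, want, got))

for path, field, want, got in mismatches:
    print(f"MISMATCH {path} [{field}]: expected {want.isoformat()}, tool says {got.isoformat()}")
print(f"{len(mismatches)} mismatching timestamps")
```

A swap of the kind described above (one timestamp reported as another) shows up immediately as a pair of mismatches on the same file.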

Add to that correct identification of failure situations. This is difficult to explain, but … there are ISO 9660 volumes that are correctly formatted according to the standard, but which most forensic tools will not recognize as ISO 9660 volumes, typically saying that this is not a correctly formatted file system or, in bad cases, simply crashing. If the tool incorrectly tells you that this is not an ISO 9660 file system, further processing of an anomalous (but correct) filesystem may be stopped short. If the tool had instead said 'this looks like an ISO 9660 file system, but I can't make sense of … whatever', an FA would be in a much better position to make a correct decision.

(Anyone asking 'really? how often does that really happen?' only proves my point. For reasons that should be obvious.)
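To illustrate the difference between "this is not an ISO 9660 file system" and "this looks like ISO 9660, but …", here is a rough sketch (Python) of a probe that follows the published ISO 9660 / ECMA-119 layout (volume descriptors start at sector 16 of 2048-byte sectors, signature "CD001" at offset 1 of the descriptor) and reports sanity-check failures explicitly rather than flatly rejecting the volume. The function and file names are made up, and a real tool would check far more than this.

```python
# Rough sketch: distinguish "no ISO 9660 signature at all" from "signature
# present but the descriptor fails sanity checks". Offsets follow the
# ISO 9660 / ECMA-119 Primary Volume Descriptor layout.
import struct
import sys

SECTOR = 2048

def probe_iso9660(image_path: str) -> str:
    with open(image_path, "rb") as fh:
        fh.seek(16 * SECTOR)          # first volume descriptor
        vd = fh.read(SECTOR)
    if len(vd) < SECTOR or vd[1:6] != b"CD001":
        return "no ISO 9660 signature found"

    problems = []
    if vd[0] != 1:                    # type 1 = Primary Volume Descriptor
        problems.append(f"first descriptor has type {vd[0]}, not a PVD")
    # Logical block size is stored both-endian at offsets 128 (LE) and 130 (BE).
    lbs_le = struct.unpack_from("<H", vd, 128)[0]
    lbs_be = struct.unpack_from(">H", vd, 130)[0]
    if lbs_le != lbs_be:
        problems.append("both-endian logical block size fields disagree")
    elif lbs_le not in (512, 1024, 2048):
        problems.append(f"unusual logical block size {lbs_le}")

    if problems:
        return "looks like ISO 9660, but: " + "; ".join(problems)
    return "ISO 9660 Primary Volume Descriptor passes basic sanity checks"

if __name__ == "__main__":
    print(probe_iso9660(sys.argv[1]))
```

The point is only the shape of the answer: an examiner told "looks like ISO 9660, but the block-size fields disagree" can make an informed decision, whereas "not a valid file system" stops the analysis dead.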

RAID reconstitution tools probably belong here as well: do they work in situations where they shouldn't?
Human error in general is a closely related area of testing.

A related form of testing is meta-testing: what defects have been reported and corrected (or remain uncorrected) over the past year or so? The manufacturer would simply have to provide that information. Can any conclusions be drawn about systematic errors in tool development, in areas of functionality, in quality assurance? Those conclusions could easily tell us where additional testing may be required.

(Case in point: the Danish problem with cell-phone forensics - https://www.forensicfocus.com/Forums/viewtopic/t=18014/)
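A minimal sketch of what such meta-testing might look like, assuming a manufacturer provides (or a central body compiles) a simple defect list: tally defects by functional area and status to see where additional testing effort should be concentrated. The CSV layout here is invented for the example.

```python
# Minimal sketch of "meta-testing": summarise a defect list by functional
# area and status. The defects.csv layout (Area, Status columns) is an
# invented example, not any vendor's real format.
import csv
from collections import Counter

by_area = Counter()
still_open = Counter()

with open("defects.csv", newline="", encoding="utf-8") as fh:
    for row in csv.DictReader(fh):
        by_area[row["Area"]] += 1
        if row["Status"].lower() != "fixed":
            still_open[row["Area"]] += 1

print("Defects reported over the period, by functional area:")
for area, total in by_area.most_common():
    print(f"  {area:<30} {total:>4} reported, {still_open[area]:>4} still open")
```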

 
Posted : 08/11/2019 9:13 am
 CCFI
(@ccfi)
Posts: 18
Active Member
 

Hi all

We provided a forensic examination service specialising in electronic payment systems - ATM and payment terminal systems, skimmers, chip and PIN, contactless, etc.

We shut the company when the regulator insisted on ISO 17025, as it did not fit our business model, would be prohibitively expensive for a small specialist business such as ours, and would require us to reveal very sensitive commercial information to a third party.

Several of the tools that we used were developed with help from the banks' technical experts and, for obvious commercial reasons, were very confidential.

They allowed us to examine a compromised payment terminal or payment system to establish which bank accounts had been compromised.

That information could then be confirmed from other sources such as bank accounts and transaction records, etc.

So my question is this: if the information we retrieved from a compromised payment device can be confirmed from more public records, why should we reveal a very commercially secret technique to public scrutiny, and potentially compromise the electronic banking system worldwide, in order to explain how we found information that could be verified from more public sources?

 
Posted : 12/11/2019 9:18 pm