Assumptionware, a neologism by Jonathan Zdziarski

9 Posts
4 Users
0 Likes
621 Views
jaclaz
(@jaclaz)
Posts: 5133
Illustrious Member
Topic starter
 

In a relatively recent blog post, Mr. Zdziarski coined this neologism, which I really appreciate, along with most of the opinions he expresses about some forensic software and some forensic investigators.

Here:
http://www.zdziarski.com/blog/?p=3717

Many tools are irresponsible and do not respect evidence, to say the least. But this is the quality of the forensics software assisting our government and military. Poorly written, over-priced assumptionware. The reports I’ve read had these and many other flaws in them. And sadly, this isn’t the only case I’ve worked on where I’ve been asked to review reports from third party tools. I’m often asked to consult on cases when commercial solutions have failed or fallen apart, and my own more hands-on techniques are required. To this end, forensics feels more like janitorial work than science.

jaclaz

 
Posted : 21/09/2014 5:27 pm
(@athulin)
Posts: 1156
Noble Member
 

In a relatively recent blog post, Mr. Zdziarski coined this neologism, which I really appreciate […]

Interesting post - worth reading. Particularly together with the 2009 report from the National Academies of Sciences (Strengthening Forensic Science in the United States: A Path Forward). I had just watched the PBS Frontline program 'The Real CSI' (old, but new to me), so it was doubly apposite.

Though I think I disagree somewhat (only somewhat, mind) with the statement that validation isn't possible with closed source: it is possible to cook up hostile test data to stress the tools in question. While strictly speaking this should be done very systematically, and is therefore time-consuming and thus expensive, it seems that some of the tools I look at curl up as soon as they're exposed to anything slightly out of the ordinary.

While a 'real' validation would be targeted to a particular tool, it is still possible to create tool-agnostic but hostile test data that could be reused. Less comprehensive, true, but still useful. Very little software seems to have come even close to hostile testing, and some products seem to have a history of exterminated bugs reappearing in the next major release, probably due to lack of proper source code control and/or QA.
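
To make the idea concrete, even something as crude as the sketch below produces reusable, tool-agnostic hostile data: it takes a known-good image and writes copies with a handful of random bytes corrupted, which any tool can then be pointed at. The file names and counts are placeholders; this only illustrates the principle, it is not a systematic test design.

```python
# Rough sketch: derive "hostile" variants of a known-good test image by
# corrupting a few random bytes in each copy. Paths and counts are
# placeholders; any binary input works the same way.
import random
from pathlib import Path

SOURCE = Path("known_good.dd")       # hypothetical known-good image
OUT_DIR = Path("hostile_variants")   # where the mutated copies are written
VARIANTS = 10                        # how many hostile copies to produce
FLIPS_PER_VARIANT = 32               # bytes to corrupt in each copy

def make_variants(seed: int = 0) -> None:
    rng = random.Random(seed)        # fixed seed, so the corpus is reproducible
    pristine = SOURCE.read_bytes()
    OUT_DIR.mkdir(exist_ok=True)
    for i in range(VARIANTS):
        mutated = bytearray(pristine)              # start from the clean image
        for _ in range(FLIPS_PER_VARIANT):
            pos = rng.randrange(len(mutated))
            mutated[pos] ^= rng.randrange(1, 256)  # flip some bits at this offset
        out_path = OUT_DIR / f"variant_{i:02d}.dd"
        out_path.write_bytes(mutated)
        print(f"wrote {out_path}")

if __name__ == "__main__":
    make_variants()
```

Point the tool under test at each variant and watch for crashes, hangs, or silently wrong output; anything that falls over here will certainly not survive deliberately crafted data.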

But just catching the low-hanging fruit is, as any pen-tester knows, not to be despised.

 
Posted : 21/09/2014 11:13 pm
jaclaz
(@jaclaz)
Posts: 5133
Illustrious Member
Topic starter
 

Though I think I disagree somewhat (only somewhat, mind) with the statement that validation isn't possible with closed source: it is possible to cook up hostile test data to stress the tools in question.

I find the original statement a little more subtle:

Since it’s closed source, the community at large can’t validate the tools either. If investigators aren’t doing their homework to individually validate the artifacts on every case (and subsequently provide feedback to the software manufacturer), the consequences could mean

The bolding and underlining are mine.

A closed source, commercial-only (and possibly additionally LE-only) tool definitely has fewer users (in sheer numbers) and thus fewer chances of being run on "random", or simply "more", data by more people, and consequently fewer chances that a bug (if any) is found.

I read it more as a way of saying (not entirely without reason) that forensic investigators should be more proactive (or less lazy, depending on whether you see the glass as half full or half empty) and that the makers of the software (of course not all of them, but a few surely) could be more careful in their testing and more responsive to reports.

jaclaz

 
Posted : 21/09/2014 11:36 pm
(@joachimm)
Posts: 181
Estimable Member
 

The problem runs much deeper than just the software.

> In forensics, we often misplace our trust in tools that,
Not only tools, but also processes, reference material, and the source data.

IMO the bottom line is a general lack of "thesis, antithesis, synthesis": http://en.wikipedia.org/wiki/Thesis,_antithesis,_synthesis

A lot of the software is created based on the information (articles, papers, books) out there; often this information is incomplete and sometimes even incorrect. That is not necessarily bad in itself, but it is very bad when it is assumed to be the truth. So the source material is not validated.

That a lot of companies out there hire programmers rather than domain experts to write software is very understandable; finding someone with both skill sets is rare. But since these programmers are not domain experts, and these companies want to make their customers happy, they might apply software QA methodologies, yet they are unlikely to apply digital forensic QA.

So how can you (as the domain expert) validate the output of a tool when you don't know what the ground truth is?

Open vs. closed source is a recurring discussion, alas nothing new; see e.g. the article by Brian Carrier from around 2003: http://www.digital-evidence.org/papers/opensrc_legal.pdf

 
Posted : 26/09/2014 11:17 am
pbobby
(@pbobby)
Posts: 239
Estimable Member
 

Fine sentiment and all - but the job still has to be done. Can't be paralyzed by the potential for errors, gaps in data and the like. Still need to put the guy behind bars.

If your case hinges on one or two critical pieces of information - spend the effort to validate manually or with multiple tools. But get that analysis started….
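
For what it's worth, the multi-tool part of that check can be partly scripted, so that only the disagreements need the manual effort. A rough sketch, assuming both tools can export a CSV with a path and an MD5 column (the column and file names here are made up; adjust them to whatever the tools actually produce):

```python
# Sketch: diff the file listings exported by two tools and report where
# they disagree on the hash of a path. Column names and file names are
# assumptions about the exports, not any specific tool's format.
import csv

def load_hashes(csv_path: str, path_col: str = "path", hash_col: str = "md5") -> dict:
    """Map file path -> reported hash from one tool's CSV export."""
    with open(csv_path, newline="", encoding="utf-8") as fh:
        return {row[path_col]: row[hash_col].lower() for row in csv.DictReader(fh)}

def compare(report_a: str, report_b: str) -> None:
    a, b = load_hashes(report_a), load_hashes(report_b)
    only_a = sorted(set(a) - set(b))
    only_b = sorted(set(b) - set(a))
    mismatched = sorted(p for p in set(a) & set(b) if a[p] != b[p])
    print(f"{len(only_a)} paths only in {report_a}")
    print(f"{len(only_b)} paths only in {report_b}")
    print(f"{len(mismatched)} paths with differing hashes:")
    for p in mismatched:
        print(f"  {p}: {a[p]} vs {b[p]}")

if __name__ == "__main__":
    compare("tool_a_export.csv", "tool_b_export.csv")  # hypothetical exports
```

Whatever the two exports disagree on is exactly the short list worth validating by hand.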

 
Posted : 26/09/2014 4:40 pm
jaclaz
(@jaclaz)
Posts: 5133
Illustrious Member
Topic starter
 

Fine sentiment and all - but the job still has to be done. Can't be paralyzed by the potential for errors, gaps in data and the like. Still need to put the guy behind bars.

If your case hinges on one or two critical pieces of information - spend the effort to validate manually or with multiple tools. But get that analysis started….

Sure, and let's put people behind bars because of these potential errors, since they are guilty anyway.

Another snippet from the article that you probably missed:

I worked from a physical image dump created by a commercial forensics tool, and three reports from various tools which, as it would turn out, appeared to be misreporting (or at best “under explaining”) at least some of data that the case would later hinge on. What the tools didn’t report turned out to be much more interesting than what they did, and this – combined with whatever other evidence the Army had gathered – eventually led to the turnaround of the case.

When I analyze evidence, I try to do so without knowing exactly what kind of “smoking gun” I’m looking for; often times, I generate a long report with sets of dates and activities, and then afterwards discuss the details of the case with the attorneys to see how relevant my findings are, and we figure out the context that best explains the artifacts. This seems more honest to me than going on a hunt for specific data. I know a number of “professional” forensic examiners who only search for what they’re looking for to prosecute a case – an image, a text, what have you, and then ignore the rest of the evidence on the device, which may exonerate the suspect.

There was no delay in starting the analysis in the referenced case; the analysis was done all right, and more than once. BUT it was reported how reviewing those reports led to the belief that the results from (I presume) "market leading" tools and "top notch" forensic investigators were flawed, misrepresented (or "under explained") the sequence of events, or lacked a proper explanation of cause-and-effect relationships, to such an extent that some charges were dropped.

Beyond the specific case, I find this something the community of pros should spend some time thinking about: the actual quality of the tools they use daily, and the way they use them.

This won't, IMHO, stop the job from being done today, but hopefully it may help in doing a better job tomorrow.

jaclaz

 
Posted : 26/09/2014 5:43 pm
pbobby
(@pbobby)
Posts: 239
Estimable Member
 

Sure, and let's put people behind bars because of these potential errors, since they are guilty anyway.

Who is claiming that digital forensics is an exact science? And exactness is not always measured by how much testing has been done against your toolset. Tools or a manual approach, the 1s and 0s are rarely cut and dried.

Like I said, don't become paralyzed just because you fear potential errors, which a human can make just as easily doing something manually as with a tool.

It's this kind of article that hearkens back to the days of 'I build websites using Notepad'. When is your testing ever enough to start trusting the tool? What's the point of having tools then? Is this some sort of badge that we must wear on our arms? 'I'm better than you because I do forensics manually'?

This sort of FUD comes out every year. I guarantee that everyone here, including the 'big names' in this thread, uses tools: tools that change, tools that were given a cursory test once or twice.

 
Posted : 26/09/2014 7:08 pm
jaclaz
(@jaclaz)
Posts: 5133
Illustrious Member
Topic starter
 

Who is claiming that digital forensics is an exact science?

No one, AFAIK.

Someone is wishing that it could become more accurate than it is now, and claiming that this can be achieved by being more accurate in the testing, documenting and use of the tools, or by making "better" tools.

jaclaz

 
Posted : 26/09/2014 8:01 pm
(@joachimm)
Posts: 181
Estimable Member
 

Can't be paralyzed by the potential for errors, gaps in data and the like.

Who says you have to be paralyzed?

If your case hinges on one or two critical pieces of information - spend the effort to validate manually or with multiple tools.

Interesting point; so how would you go about this? Look on the internet for articles, read a book? Do you cross-validate the findings in the article or book? I recently ran into serious errors in a book that even claims to be a forensic book. The fact that I have done a lot of validation made it immediately clear to me that the book was full of errors. I reported the errors to the authors, but it is very disappointing that they have not addressed various errors they made in computer science fundamentals. It is even more disappointing to see various, to use your terminology, "big names" claiming the book is good while it is full of errors.

So it would be interesting to hear: how do you validate your tools?
I largely do it by writing my own, and documenting my findings in the process.
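
To give a trivial example of what I mean by "writing my own": if a tool reports a human-readable date for a Windows FILETIME value, a few independent lines are enough to recompute it from the raw 64 bits and confirm or contradict the report (the raw value below is only illustrative):

```python
# Sketch: independently recompute a Windows FILETIME (100-nanosecond ticks
# since 1601-01-01 UTC) so a tool's reported date can be cross-checked
# against the raw bytes it was parsed from.
import struct
from datetime import datetime, timedelta, timezone

FILETIME_EPOCH = datetime(1601, 1, 1, tzinfo=timezone.utc)

def filetime_to_datetime(raw: bytes) -> datetime:
    """Convert 8 little-endian bytes (100 ns ticks since 1601) to a datetime."""
    ticks, = struct.unpack("<Q", raw)
    return FILETIME_EPOCH + timedelta(microseconds=ticks // 10)

# Example: raw bytes as pulled from the evidence (the value is made up).
raw_value = struct.pack("<Q", 130551090000000000)
print(filetime_to_datetime(raw_value))  # compare this with what the tool reports
```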

When is your testing ever enough to start trusting the tool?

There will never be a point where you have tested enough edge cases, so how can we, as a non-exact science, address that?
* By applying scientific methods
* By making sure to validate your findings when it matters; when it matters will depend on the case
* By having open format specifications
* By having tooling that is transparent in what it finds (not necessarily open source)
* By having proven and validated methodologies (which is ironic if you consider Zdziarski a couple of years ago)

This sort of FUD comes out every year. I guarantee that everyone here, including the 'big names' in this thread, uses tools

Whether it is FUD or not will largely depend on how you approach the discussion.

 
Posted : 27/09/2014 1:04 am