Dealing with open-source tools (Sleuth Kit/Autopsy, Scalpel, ...), the question comes up during certifications: are we sure what the tool is doing?
How do you deal with this issue?
Dealing with open-source tools (Sleuth Kit/Autopsy, Scalpel, ...), the question comes up during certifications: are we sure what the tool is doing?
How do you deal with this issue?
I would suggest the same applies to any tool, open-source or otherwise. I have found and reported a number of faults in leading 'forensic' software. It also depends on what you're trying to achieve.
Dual (or more) tooling and testing/experimentation are two methods I use regularly.
Dealing with open-source tools (Sleuth Kit/Autopsy, Scalpel, ...), the question comes up during certifications: are we sure what the tool is doing?
I'm not sure that's the right question. To my mind, the correct question is: how do you ensure that software defects, insufficiently trained analysts and other factors do not affect the quality of the forensic analysis?
In other areas of software applications – especially in areas where software failures affect people's lives – quality assurance is pretty important. One way to get some input is to ask your software suppliers to present their quality assurance program. If they can't or won't, or if you feel their program is inadequate, you don't use their software, or you add your own acceptance testing before you use it (which may be a bit of a burden on smaller companies, unfortunately).
Or … you look at your own quality assurance program: how do *you* ensure that you don't produce a poor analysis? One of the activities would presumably be asking the software providers for their QA processes, as already described. Further methods may be verifying all results with at least two different tools (which unfortunately doesn't strictly meet the requirement, as it is not at all unheard of for two different tools to have similar defects – NTFS timestamps are one area where I see typical errors), or requiring that each analysis project gets assigned a project critic whose job is to find all your errors *before* you deliver the report – a kind of analysis proof-reader.
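As a deliberately simple illustration of the dual-tool idea, here is a sketch that compares the modification timestamps two tools report for the same file system and flags every disagreement for manual follow-up. The CSV file names and column names are hypothetical; real exports will need their own parsing.

```python
import csv
from pathlib import Path

# Hypothetical CSV exports from two different tools; each is assumed to have
# a "path" column and a "modified" column.
TOOL_A_CSV = Path("tool_a_timestamps.csv")
TOOL_B_CSV = Path("tool_b_timestamps.csv")

def load_timestamps(csv_path: Path) -> dict:
    """Map file path -> modification timestamp as reported by one tool."""
    with csv_path.open(newline="") as f:
        return {row["path"]: row["modified"] for row in csv.DictReader(f)}

tool_a = load_timestamps(TOOL_A_CSV)
tool_b = load_timestamps(TOOL_B_CSV)

# Disagreements are the candidates for manual decoding / deeper inspection.
for path in sorted(tool_a.keys() & tool_b.keys()):
    if tool_a[path] != tool_b[path]:
        print(f"DISAGREE  {path}: {tool_a[path]!r} vs {tool_b[path]!r}")

# Files listed by only one of the two tools are suspicious as well.
for path in sorted(tool_a.keys() ^ tool_b.keys()):
    print(f"ONLY-ONE  {path}")
```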
Some errors you may be able to live with, as long as you can detect them. For example, if I see an NTFS timestamp reported as 'illegal date/time' (or not at all – an empty field), I usually find it to be due to a bug in some underlying time-conversion code. Those timestamps then need to be decoded manually instead. That particular step becomes part of my SOP, which is another aspect of QA.
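For NTFS that manual step is not hard: the on-disk value is a 64-bit count of 100-nanosecond intervals since 1601-01-01 00:00:00 UTC. A minimal Python sketch of the conversion (the round-trip at the end is only a self-check, not data from a real case):

```python
from datetime import datetime, timedelta, timezone

# NTFS (FILETIME) epoch: 1601-01-01 00:00:00 UTC
NTFS_EPOCH = datetime(1601, 1, 1, tzinfo=timezone.utc)

def decode_ntfs_timestamp(raw: int) -> datetime:
    """Convert a raw 64-bit NTFS timestamp (100 ns intervals since the
    NTFS epoch) into a timezone-aware UTC datetime."""
    if raw < 0:
        raise ValueError("NTFS timestamps are unsigned")
    return NTFS_EPOCH + timedelta(microseconds=raw // 10)

def read_ntfs_timestamp(raw_bytes: bytes) -> datetime:
    """The value is stored little-endian on disk, e.g. in $STANDARD_INFORMATION."""
    return decode_ntfs_timestamp(int.from_bytes(raw_bytes, "little"))

# Self-check: encode a known date and decode it again.
known = datetime(2010, 3, 25, 14, 30, tzinfo=timezone.utc)
delta = known - NTFS_EPOCH
raw = (delta.days * 86400 + delta.seconds) * 10**7 + delta.microseconds * 10
assert decode_ntfs_timestamp(raw) == known
print(hex(raw), "->", decode_ntfs_timestamp(raw).isoformat())
```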
Or … something that fits your own organization better, and is more suited to your economic situation.
@Athulin
You're right. That's what I mentioned.
The way I do an analysis is documented in a reproducible manner, and I only describe facts (such as: user ID x logged on at a specific time – not person x).
But doing things with tools rather than by hand raises the question: has the tool done what it should? We can get a certification over here (QA) covering how we perform the steps, but in the end we trust the tools' results in doing our job. That is the point I would like to eliminate.
Bye for now.
Speaking for myself, I keep a reference image. Each and every time one of my tools has a new release or bug fix, I run it against this image. I have exhaustively analyzed this image and know the results that should appear. If a tool finds evidence that has not been found before, I have to find out why, and whether all the other tools are faulty on that point. Likewise, if the new release does NOT find something that has been found before, I have to track that down as well. I use the same procedure for updates to imaging software, and to periodically check my write-blocking devices.
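That kind of regression run is easy to script so it actually gets done on every release. A minimal sketch, assuming a reference image at reference_image.dd, a baseline/ directory of known-good outputs reviewed once by hand, and The Sleuth Kit's fls on the PATH (any other tool's command line can be dropped in the same way):

```python
import hashlib
import subprocess
from pathlib import Path

IMAGE = Path("reference_image.dd")   # hypothetical reference image
BASELINE_DIR = Path("baseline")      # known-good outputs, verified manually once

def sha256_of(path: Path) -> str:
    """Hash the image first, so a changed reference image can't masquerade as a tool bug."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def run_fls(image: Path) -> str:
    """Recursive file listing from The Sleuth Kit's fls in body-file format."""
    result = subprocess.run(
        ["fls", "-r", "-m", "/", str(image)],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def compare(name: str, current: str) -> None:
    expected = (BASELINE_DIR / name).read_text()
    if current == expected:
        print(f"[OK]   {name} matches the baseline")
    else:
        print(f"[DIFF] {name} differs from the baseline -> investigate before trusting this release")

if __name__ == "__main__":
    compare("image.sha256", sha256_of(IMAGE) + "\n")
    compare("fls_bodyfile.txt", run_fls(IMAGE))
```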
I'll use these references now to start validating the software:
http://dftt.sourceforge.net/
http//
That seems to be a good way to show that a tool set is capable of doing what it is intended to do.
I've set up a set of tools, and the idea is to use the right tool(s) for each specific test case and to document what I found and whether it matches the documented value. That's the best I can do.
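For documented test images (such as the DFTT cases linked above) the comparison can be reduced to hashes: run the tool under test against the image, hash whatever it recovered, and check that against the hashes published for the test case. A minimal sketch; the directory and hash-list file names are hypothetical, and the hash list is assumed to hold one MD5 per line transcribed from the test-case page.

```python
import hashlib
from pathlib import Path

CARVED_DIR = Path("carved_output")        # hypothetical: where the carving tool wrote its results
EXPECTED_FILE = Path("expected_md5.txt")  # hypothetical: one MD5 per line from the test-case page

def md5_of(path: Path) -> str:
    return hashlib.md5(path.read_bytes()).hexdigest()

expected = {line.strip().lower()
            for line in EXPECTED_FILE.read_text().splitlines() if line.strip()}

# Carving tools rename their output, so compare by content hash, not by file name.
recovered = {md5_of(p) for p in CARVED_DIR.rglob("*") if p.is_file()}

print(f"documented files recovered: {len(expected & recovered)} of {len(expected)}")
for missing in sorted(expected - recovered):
    print(f"  missing  : {missing}")
print(f"additional carved files   : {len(recovered - expected)} "
      f"(false positives or partial carves -> document and investigate)")
```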