Harlan,
There are a number of things I've discussed, but that is one of them, yes. It all ties together.
Thanks for the effort, I'm not quite sure where to look. I suppose findlaw.com may have some info.
Here's what I've gotten so far…
From some cops, volatile data amounts to evidence that a law enforcement officer observed…for example, a LEO who saw a guy in a red shirt run into a convenience store. That observation can be used as evidence/testimony if a guy in a red shirt allegedly robbed the store.
However, I have yet to have anyone respond saying that they used volatile data in court as evidence.
I was pointed to an article by Chris Brown and Erin, published by Elsevier, but I don't have access to the article. I do, however, have access to a presentation:
http//
This presentation focuses primarily on imaging a live system and not so much collecting volatile data. However, I think it does provide some interesting information along those lines.
Over the next week, I should have the opportunity to speak to some people about this sort of thing, and get some more detailed information.
Thanks,
Harlan
This was a presentation given at the Techno Forensics conference last November. It's kind of a sales pitch, but there is some interesting info on live investigations.
http//
Hogfly,
I've spoken to a few folks regarding the case law issue, and so far, no one's been able to point me to actual case law.
However, the camps seem to be firmly divided. On the one hand, there are the purists who state that unless the system is unplugged, you don't have evidence at all. Then there are the others, mostly current and former cops, who are saying that if your jurisdiction has laws against destroying evidence, you should consider yourself in violation if you *don't* collect volatile data.
I've got another avenue I'm trying…specifically, the article that comes from the presentation I linked to in my previous post.
What I think this comes down to is the fact that the same principles as are used in the real world with murder cases, etc., are not being applied to the digital realm…yet, anyway. I agree that there is some work that needs to be done…but we can't sit around waiting for that work to be done.
Harlan
Thanks for the links, guys…sorry, I've been really busy recently.
Harlan,
In your opinion, why is the forensic field so separated? Shouldn't the science be able to prove to us which method is best? In real-world forensics there are accepted practices for matching shell casings to a gun and conducting dead-body analysis. This field seems to be at a "do what you will, just keep notes and don't modify the evidence" phase.
Is there a solid method of testing live response tools that's been developed? The Helix discussion is only one of many that needs to take place. Tool validation efforts need to be stepped up.
> In your opinion, why is the forensic field so separated?
In the real world, someone's done the work already. Blood/DNA analysis, fingerprints, etc…someone's gone out, done the research, documented it, had it peer reviewed, etc. That leap hasn't been made to the digital realm yet. For all of the folks who claim to be "in the field", there are very, very few who innovate in any way.
On the live response side, there are folks out there doing this on a regular basis. However, they're doing it…not sitting back and being an armchair quarterback about why you *can't* do it.
A lot of the folks on the other side of the fence are Linux bigots (i.e., those who feel that "if you can't do it on Linux, it's not worth doing"), and don't see the point of doing *anything* on Windows.
You've also got the issue that most techies hate to document anything. Write it down? Why bother? I have it all in my head…oops, now what was that URL?
Finally, there's shiny object syndrome. How many people post to this list (or others, like SecurityFocus) and then you never hear from them again? When you do hear back, you find out that within seconds of hitting the "Submit" button, they got pulled off on something else.
> Is there a solid method of testing live response tools that's been developed?
I'm sure there has been…I've posted to these forums before on the methodology I use. Testing atomic actions (one thing at a time) using tools like InControl5 lets me see the changes that occur. Tools like RegMon and FileMon let me see which process was responsible for the changes. Still other tools, such as file system monitors, will let me see files that were created and deleted during the testing process that don't appear in the InControl5 output. Ethereal lets me see if something is trying to communicate over the network during testing.
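To illustrate the before/after diffing idea, here's a rough Python sketch (not InControl5 itself; the target directory and workflow are just assumptions for the example) that brackets a single atomic test with two file-system snapshots:

```python
# Hypothetical sketch of the "atomic action" diffing that tools like InControl5
# automate: snapshot a directory tree, run ONE tool, snapshot again, and report
# files created, deleted, or modified. The watched path is an example only.
import hashlib
import os

def snapshot(root):
    """Map each file under root to its MD5 hash (None if unreadable)."""
    state = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    state[path] = hashlib.md5(f.read()).hexdigest()
            except OSError:
                state[path] = None
    return state

def diff(before, after):
    created  = sorted(set(after) - set(before))
    deleted  = sorted(set(before) - set(after))
    modified = sorted(p for p in before if p in after and before[p] != after[p])
    return created, deleted, modified

if __name__ == "__main__":
    target = r"C:\Windows\system32"          # directory being watched (example)
    before = snapshot(target)
    input("Run ONE tool under test, then press Enter...")   # the atomic action
    after = snapshot(target)
    for label, items in zip(("Created", "Deleted", "Modified"), diff(before, after)):
        print(label + ":")
        for p in items:
            print("  " + p)
```

This only catches file-level changes; Registry and network activity still need the monitoring tools mentioned above.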
Static testing of the tools can also be performed. For example, does a tool use the necessary APIs to access/modify the Registry (via its import table in the PE headers)? If it doesn't, then it's likely that the tool itself isn't responsible for the changes that occurred to the Registry.
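As a sketch of that static check (using the third-party pefile module; the API list and the filename are illustrative, not a definitive set), you can walk a binary's import table and flag Registry APIs:

```python
# Walk a tool's PE import table and flag Registry-related APIs imported from
# advapi32.dll. Requires the third-party "pefile" module (pip install pefile).
import pefile

REGISTRY_APIS = {b"RegOpenKeyExA", b"RegOpenKeyExW",
                 b"RegSetValueExA", b"RegSetValueExW",
                 b"RegCreateKeyExA", b"RegCreateKeyExW",
                 b"RegDeleteKeyA", b"RegDeleteKeyW"}

def registry_imports(path):
    """Return (dll, api) pairs for Registry APIs imported by the executable."""
    pe = pefile.PE(path)
    found = []
    for entry in getattr(pe, "DIRECTORY_ENTRY_IMPORT", []):
        for imp in entry.imports:
            if imp.name in REGISTRY_APIS:
                found.append((entry.dll.decode(), imp.name.decode()))
    return found

if __name__ == "__main__":
    hits = registry_imports("tool_under_test.exe")   # hypothetical filename
    if hits:
        for dll, api in hits:
            print(f"{dll}: {api}")
    else:
        print("No Registry APIs found in the import table.")
```

If the tool under test imports none of these, Registry changes observed during testing probably came from the OS or another process, not the tool.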
I think what's happened is that someone figures that they may want to do something like this, but then gets confronted with what they feel is the enormity of the task. What? Document my entire methodology…you mean I can't just say, "I ran some tools"? I have to say *which* tools, which version, where I got them? The thing about documenting your methodology is that it makes your testing reproducible…I should be able to document what I did to the extent that someone else can follow the methodology and get the same or very similar results.
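One possible way to make the "which tools, which version, where I got them" part painless (an assumption on my part, not a prescribed format) is to append a manifest entry for every tool run:

```python
# Record one manifest entry per tool run: name, version, source, a hash of the
# binary, and a timestamp. All values in the example are placeholders.
import hashlib
import json
import time

def manifest_entry(tool_path, version, source_url, notes=""):
    with open(tool_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "tool": tool_path,
        "version": version,
        "source": source_url,
        "sha256": digest,
        "run_at": time.strftime("%Y-%m-%d %H:%M:%S UTC", time.gmtime()),
        "notes": notes,
    }

if __name__ == "__main__":
    # Illustrative values only -- substitute the actual tool, version, and source.
    entry = manifest_entry("tool_under_test.exe", "1.0", "https://example.com/tools")
    with open("methodology_manifest.json", "a") as log:
        log.write(json.dumps(entry) + "\n")
```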
It's really not all that hard…I'm working on something like this now for my next book. The thing is that it has to be done, and to be used. If it's not good enough, don't just walk away from it…say why it's not good enough, and what needs to be done to improve it…and then improve it.
Harlan
To continue…
> Shouldn't the science be able to prove to us which method is best?
Well, the first issue is that to most folks, it's not a science. Many feel that it's an art, but IMHO, that's just a way to avoid work.
The "best method" is determined by the situation. The first responder needs to be smart enough and trained enough to be able to make a reasoned decision. Many folks avoid such a thing simply b/c of things like the number of DLL files in the system32 directory.
Computer forensics is very deterministic…at its most basic level, everything is either a 1 or a 0. Either the file or evidence is there, or it's not. Just because you don't know that some Registry keys are ROT-13 strings doesn't make it "magic".
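For example, the ROT-13 obfuscation on some Registry entries (the UserAssist value names are the usual case) is trivial to undo; a one-liner in Python, with a made-up sample value name:

```python
# Decode a ROT-13-obfuscated Registry value name (UserAssist-style example).
import codecs

encoded = "HRZR_EHACNGU:P:\\Jvaqbjf\\flfgrz32\\pnyp.rkr"   # sample value name, not from a real case
print(codecs.decode(encoded, "rot_13"))
# -> UEME_RUNPATH:C:\Windows\system32\calc.exe
```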
Harlan
For anyone coming to the Cybercrime Summit in Atlanta at the end of February, I'll be lecturing/instructing on this topic, "Up in Smoke - Live Analysis". Might be of interest to some of y'all posting here.
regards,
farmerdude
farmerdude,
I'm planning on attending…gotta work it out with accounting. :)
So you're saying that some of the reasons why there isn't contemporary documentation about tool testing and validation methodology are that:
Techies are loath to write things down and therefore can't be bothered?
Shiny object syndrome compels people to put in a little effort initially, and then they lose interest and give up?
With respect to your advanced knowledge, I think that's a complete cop-out. There are transient newcomers to all disciplines, and I'm sure those 'techies' are not in the majority within our field. Therefore it falls to the senior and knowledgeable members of our community, by which I mean computer forensic science as a whole, to pioneer and push these things forward.
> Is there a solid method of testing live response tools that's been developed?
I think this means an industry-type standard that's widely adopted and accepted. What benchmark do the developers of this software measure up against, if any?
>I think what's happened is that someone figures that they may want to do something like this, but then get confronted with what they feel is the enormity of the task. What? Document my entire methodology…you mean I can't just say, "I ran some tools"? I have to say *which* tools, which version, where I got them?
A lot of the computer forensics field is made up of highly qualified academics and computer scientists that have an extensive background in provable, repeatable and rigorous research. Since these people aren't lazy and yet there is a lack of validation and testing methodology, I fail to see how this is the cause.
> Shouldn't the science be able to prove to us which method is best?
I'm not sure what you mean by 'the' science, but if you mean science in general then in theory yes it should. Science and mathematics are used all the time, and at all levels, to refine procedures and techniques and remove redundant or nugatory effort. Can this be applied to testing live response tools or methods? Probably. Any time soon? Probably not.
Just my thoughts. As usual I include my disclaimer: if I've misinterpreted your posts, or got the wrong end of the stick, then I apologise!