
Live Forensics

25 Posts
6 Users
0 Reactions
2,303 Views
 twig
(@twig)
Active Member
Joined: 19 years ago
Posts: 9
Topic starter  

I want to quantify the impact a live forensics tool has on the system it is used on, but I am not sure how to go about performing the quantification.

From my research I see that one method to achieve this is to perform a bit comparison between two memory dumps of the target system, taken before and after the tool has been used, in order to see the number of bits that have changed.
Not much information is available on how this comparison is performed, but I would imagine that while the system is running it is constantly changing, so no two memory dumps taken at different instants would ever be the same, and furthermore the process used to collect the memory dump would have its own impact on the system?
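
To make it concrete, something like this rough sketch is what I have in mind (assuming both dumps are raw, equal-sized binary files; the file names are just placeholders):

```python
# Rough sketch: count how many bits differ between two raw memory dumps.
# Assumes both dumps are plain binary files of equal size; the file names
# below are placeholders.
CHUNK = 1024 * 1024  # compare 1 MB at a time to keep memory use low

def count_changed_bits(dump_a, dump_b):
    changed = 0
    with open(dump_a, "rb") as fa, open(dump_b, "rb") as fb:
        while True:
            a = fa.read(CHUNK)
            b = fb.read(CHUNK)
            if not a and not b:
                break
            # XOR corresponding bytes; each set bit in the result is a changed bit
            changed += sum(bin(x ^ y).count("1") for x, y in zip(a, b))
    return changed

print(count_changed_bits("before.raw", "after.raw"))
```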

Would you recommend using this method, or can anybody recommend a better way of performing such an analysis?
Thanks


   
Quote
 ddow
(@ddow)
Reputable Member
Joined: 21 years ago
Posts: 278
 

Just thinking out loud here: are there hardware cards that can dump running memory? Diagnostic or research-type equipment? If so, you could dump memory, install and run the live acquisition program, and dump memory again during that. Analysis will be, uh, painful. It would be a good contribution to the field, however, as we'll see live acquisition more and more.


   
ReplyQuote
 twig
(@twig)
Active Member
Joined: 19 years ago
Posts: 9
Topic starter  

Thanks, Dennis

I would be very interested in a product which could be used for hardware-based acquisition of volatile memory. I've been looking into it and there are some prototype versions like Tribble, but I cannot find any commercial products which could be used for RAM acquisition. Does anybody know of any such products?


   
ReplyQuote
(@tgoldsmith)
Eminent Member
Joined: 19 years ago
Posts: 35
 

Hi Twig,

There isn't much around publicly. Tribble is lying in someone's basement in the prototype stage. Komoku were meant to be working on one or something similar (Copilot), but I haven't heard anything recently.

You could grab the contents of physical memory (although not perfectly) through a FireWire card. Try looking at this presentation for some downloadable code if you don't want to go through the process of developing it yourself. I've not tried this tool personally, but some people have had success with it.

For more general information on memory analysis to support your aims, check out Aaron Walters' 4tphi.net, which has a ton of links to papers and resources.


   
ReplyQuote
(@echo6)
Trusted Member
Joined: 21 years ago
Posts: 87
 

You could grab the contents of physical memory (although not perfectly) through a FireWire card.

When it works, it works well. A lot depends on the manufacturer's implementation of their FireWire hardware. It can cause a BSOD; in my experience, mostly when used against Windows 2000.

Even using the FireWire method leaves some traces on the system.

I've used VMware when examining a tool's impact on the system, which worked fine, but I had to use a physical machine when looking at FireWire memory acquisition.

Also, check out Joanna Rutkowska's blog post "Beyond The CPU: Cheating Hardware Based RAM Forensics":
http://theinvisiblethings.blogspot.com/2007/01/beyond-cpu-cheating-hardware-based-ram.html

Another reason for looking at analysis tools' behaviour is to assess how vulnerable they may be to malicious code or anti-forensics. I noticed the other thread on here concerning anti-forensics; investigators are more vulnerable under live forensic examination conditions than they would be following a post-mortem exam.


   
ReplyQuote
keydet89
(@keydet89)
Famed Member
Joined: 21 years ago
Posts: 3568
 

Twig,

> I want to quantify the impact a live forensics tool has on the system it is
> used on, but I am not sure how to go about performing the quantification.

Well, let's reason through this…what is it you're trying to quantify?

Are you trying to quantify how much memory the tool used while running on the system? Or are you trying to determine the non-volatile artifacts left by the use of the tool?

For the effects that a tool has on the volatile memory of a system, you can do a number of things, many of which echo6 points out in his thesis. You can run performance monitoring tools that check for the amount of memory used, for example. You can also run the tool and perform a live dump of RAM while the tool is running, then determine the number of memory pages the process consumes, how many threads are running, etc.
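
As a rough illustration of that kind of monitoring (just a sketch; it assumes the third-party psutil library, where Perfmon on Windows would give you equivalent counters, and the tool name is a placeholder):

```python
# Sketch: sample the memory footprint and thread count of a tool while it runs.
# Assumes the third-party psutil library (pip install psutil); the executable
# name below is a placeholder for whatever IR tool you are testing.
import subprocess
import time

import psutil

def profile(cmd, interval=0.5):
    proc = subprocess.Popen(cmd)
    ps = psutil.Process(proc.pid)
    samples = []
    while proc.poll() is None:
        try:
            mem = ps.memory_info()
            samples.append((mem.rss, mem.vms, ps.num_threads()))
        except psutil.NoSuchProcess:
            break  # the tool exited between the poll and the sample
        time.sleep(interval)
    return samples

for rss, vms, threads in profile(["some_ir_tool.exe"]):
    print(f"resident={rss} virtual={vms} threads={threads}")
```

Keep in mind that a very short-lived CLI tool may exit before many samples are taken.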

I think it might help if you can sort of narrow down what you're looking for.

Thanks,

H


   
ReplyQuote
 twig
(@twig)
Active Member
Joined: 19 years ago
Posts: 9
Topic starter  

Sorry for the delay in my reply; I have been away. Thank you for all the advice.

Echo6, you said you use VMware to monitor the effect that live tools have on the system; how is this done? I would also be interested in reading your thesis, which keydet89 mentioned. If possible, could you provide a link to it?

Keydet89, in answer to your question, what I want to do is quantify how much memory the tool has used while running on the system.

I have also been looking into writing code to compare two dd images, one taken before and one after the tool was used, in order to see the total number of bytes that have changed. I have not been having much luck with this; can anybody recommend the best way to compare dd images?
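
For reference, this is roughly the sort of comparison I have been attempting (a sketch only; both images are assumed to be the same size and the file names are placeholders):

```python
# Sketch: count the number of bytes that differ between two raw dd images.
# Assumes both images are the same size; the file names are placeholders.
CHUNK = 1024 * 1024

def count_changed_bytes(image_a, image_b):
    changed = 0
    with open(image_a, "rb") as fa, open(image_b, "rb") as fb:
        while True:
            a = fa.read(CHUNK)
            b = fb.read(CHUNK)
            if not a and not b:
                break
            changed += sum(1 for x, y in zip(a, b) if x != y)
    return changed

print(count_changed_bytes("before.dd", "after.dd"))
```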


   
ReplyQuote
(@tgoldsmith)
Eminent Member
Joined: 19 years ago
Posts: 35
 

Hi Twig,

Just thinking out loud here, but hopefully this will help. I've tried it before and it worked quite well.

Put all the tools you want to test (I assume they are IR tools?) on an ISO.

Start a VMware machine with an OS of your choice installed and the ISO loaded as a virtual disk in the drive.

Once it has booted, suspend the VM and copy its .vmem file to another location. Resume the VM and run the tool you want to test. When it has finished executing, quickly suspend the VM again and make a second copy of the .vmem file. If the tool is more complex, you could make repeated snapshots over time to track memory usage.
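
If you want to cut down on the manual steps, something along these lines could drive it (a sketch only; it assumes VMware's vmrun command-line utility is available, and the .vmx/.vmem paths are placeholders you would adjust):

```python
# Sketch: suspend the VM, copy its .vmem file aside, then resume it.
# Assumes VMware's vmrun command-line utility is on the PATH; the .vmx and
# .vmem paths are placeholders for your own test machine.
import shutil
import subprocess

VMX = "testbed/testbed.vmx"    # placeholder path to the VM's .vmx file
VMEM = "testbed/testbed.vmem"  # placeholder path to its memory file

def snapshot(label):
    subprocess.run(["vmrun", "suspend", VMX], check=True)
    shutil.copyfile(VMEM, f"snapshot_{label}.vmem")
    subprocess.run(["vmrun", "start", VMX], check=True)

snapshot("baseline")
# ...run the tool you want to test inside the VM...
snapshot("after_tool")
```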

For example, when you pop a Helix CD in the drive it might autoplay and run the GUI unless you prevent it from doing so. I'd imagine loading the (Macromedia-based?) GUI probably modifies quite a bit of memory anyway, what with all the nice graphics being loaded. You might want to snapshot once the GUI has loaded and again once you have selected the tool of your choice. You could then compare the level of memory alteration with just running the tool itself without the GUI and see whether there is a considerable difference.

As for how to compare the files… A quick and dirty way of doing this is to write a program or script that reads in a page of memory (4096KB), hashes it using MD5 or SHA1 and stores the hash. Once you have hashed all the pages in the vmem file you can do the same with your incremental vmems and monitor which pages have changed over time.
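
A sketch of that per-page hashing (the page size and snapshot file names here are assumptions; adjust them to your setup):

```python
# Sketch: hash a .vmem file page by page so two snapshots can be compared.
# The page size and snapshot file names are assumptions; adjust as needed.
import hashlib

PAGE_SIZE = 4096  # bytes per page

def page_hashes(vmem_path):
    hashes = []
    with open(vmem_path, "rb") as f:
        while True:
            page = f.read(PAGE_SIZE)
            if not page:
                break
            hashes.append(hashlib.md5(page).hexdigest())
    return hashes

before = page_hashes("snapshot_baseline.vmem")
after = page_hashes("snapshot_after_tool.vmem")
changed = sum(1 for h1, h2 in zip(before, after) if h1 != h2)
print(f"{changed} of {len(before)} pages changed")
```

Storing hashes rather than raw pages keeps the comparison cheap to rerun across several incremental snapshots.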

Dan Farmer and Wietse Venema do something very similar in their book, Forensic Discovery.

(Aside: get it, it's a really interesting book and quite unlike most forensics books out there. It has lots about performing forensics research rather than just step-by-step guides on how to do something. Read more at http://www.amazon.co.uk/Forensic-Discovery-Daniel-Farmer/dp/020163497X)

Bonus points: if your MD5 hashes per page aren't granular enough, you can chunk it further… or run ssdeep over it to create a fuzzy hash and see how much that page has changed. Either way, by hashing and comparing the pages you can at least duplicate your experiment several times and say something like "approximately 3MB of memory is modified when this program is run using the following settings". Of course, this may vary from system to system, but you get the idea.
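
If you go down the fuzzy-hashing route, a sketch along these lines would do it (it assumes the ssdeep Python bindings are installed; the snapshot names are placeholders):

```python
# Sketch: fuzzy-hash corresponding pages of two snapshots and report how
# similar each changed page remains (0 = completely different, 100 = identical).
# Assumes the ssdeep Python bindings are installed; file names are placeholders.
import ssdeep

PAGE_SIZE = 4096  # bytes per page; adjust for your target

def pages(path):
    with open(path, "rb") as f:
        while True:
            page = f.read(PAGE_SIZE)
            if not page:
                break
            yield page

for i, (a, b) in enumerate(zip(pages("snapshot_baseline.vmem"),
                               pages("snapshot_after_tool.vmem"))):
    score = ssdeep.compare(ssdeep.hash(a), ssdeep.hash(b))
    if score < 100:
        print(f"page {i}: similarity {score}")
```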

I've been meaning to modify my testbed into a more advanced suite for a long time now, but sadly I don't have the time.

I hope this helped, or gave you some ideas. If you want to diff the files down to the byte level, that is possible, but at least mull this suggestion over and see if it works out well for you.

- Tom


   
ReplyQuote
 twig
(@twig)
Active Member
Joined: 19 years ago
Posts: 9
Topic starter  

Thanks very much Tom, that is a great help!


   
ReplyQuote
keydet89
(@keydet89)
Famed Member
Joined: 21 years ago
Posts: 3568
 

Some thoughts…

First off, by default, on most Windows systems (non-PAE-enabled), the size of a memory page is 4KB, not 4MB.

Second, I don't see how useful getting a snapshot of memory and hashing each 4KB page would be, and here's why…

Many of the IR tools that you could run, and the ones I've found to be most useful, are CLI tools, and many of them complete their tasks and exit before you could snapshot the system manually, even in VMware.

Next, consider this…when you load a program and create a process, memory pages already in use by other processes will continue to change as well. In a best-case scenario, the other processes on the system being examined will continue on their merry way, without any concern for your newly created process.

Snapshotting all of memory and then hashing each 4K page may let you know how many pages changed between snapshots, but how do you then go about determining which pages were changed by *your* process?

Also, consider this…the MS memory manager uses "pools" of memory to manage items, such as network connections, the contents of the clipboard, etc., that do not require a full 4KB page for storage. So, if the state of a network connection changes between snapshots, then the hash of the entire 4KB page will change…won't it? After all, changing a single bit changes the hash of the whole item, be it a 4KB page or a file.
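
A quick illustration of that last point (just a throwaway sketch):

```python
# Illustration: flipping a single bit in a 4KB "page" completely changes its MD5.
import hashlib

page = bytearray(4096)                     # a blank 4KB page
print(hashlib.md5(page).hexdigest())       # hash before

page[100] ^= 0x01                          # flip one bit
print(hashlib.md5(page).hexdigest())       # hash after: entirely different
```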

While I agree the impact a tool has on memory should be examined, I'm not entirely sure exactly what aspect of it needs to be observed and documented. I have to spend some more time with Russinovich and Solomon's book, but I suspect that using Perfmon to monitor the number of memory pages used by a process will be a better measure of "impact".

I would include along with that any files and/or Registry keys that are created, accessed or modified by the process.

Another thing one must consider is that running tests on a baseline system is fine, but the software load on that system needs to be documented as well. What other software and services are installed? How much memory does the system have? It's one thing to measure the effect that a process has on its environment, but what about the effect that the environment has on the process? This, my friends, is much harder to quantify because it will likely be different in every case.

…just my $0.02,
H


   
ReplyQuote
Page 1 / 3