
Live Forensics

(@echo6)
Trusted Member
Joined: 21 years ago
Posts: 87
 

I would be interested in reading your thesis

PM me your details and I shall see what I can do :)


   
(@tgoldsmith)
Eminent Member
Joined: 19 years ago
Posts: 35
 

Thanks for replying, Keydet.

> First off, by default, on most Windows systems (non-PAE-enabled), the size of a memory page is 4KB, not 4MB.

My mistake, I meant 4096 bytes. Thanks for pointing that out.

> Many of the IR tools that you could run, and the ones I've found to be most useful, are CLI tools, and many of them complete their tasks and exit before you could snapshot the system manually, even in VMware.

That's why you take the snapshots before and after you run a suite of tools, like Windows Forensic Toolchest. It can be a handy indicator and, at the very least, a fairly interesting experiment. You can tweak the WFT config files to run different apps and see how much memory gets changed with different settings. Besides, even once the application has exited you can get some feel for its impact on the system: the process will allocate memory (possibly displacing other processes' memory, which is what we are worried about, after all) and then free it. That freed memory won't always be allocated again straight afterwards.

I'm pretty sure that, if you wanted to, you could use VMware's scripting abilities to run your program of choice and then take the snapshot anyway.
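Just to sketch what I mean (untested, and all of the values below are made up - the VM path, guest credentials and tool path are examples, runProgramInGuest needs VMware Tools in the guest, and the exact vmrun options can differ between VMware versions):

```
import subprocess

# All of these values are examples -- adjust for your own lab VM.
VMX = r"C:\VMs\ir-lab\ir-lab.vmx"
GUEST_USER, GUEST_PASS = "Administrator", "password"
TOOL_IN_GUEST = r"C:\tools\wft\wft.exe"

def vmrun(*args):
    """Thin wrapper around VMware Workstation's vmrun CLI."""
    subprocess.run(["vmrun", "-T", "ws", *args], check=True)

vmrun("snapshot", VMX, "before-tool")            # memory is saved with a snapshot of a running VM
vmrun("-gu", GUEST_USER, "-gp", GUEST_PASS,
      "runProgramInGuest", VMX, TOOL_IN_GUEST)   # run the tool inside the guest
vmrun("snapshot", VMX, "after-tool")
```

The memory saved with each snapshot can then be pulled off and compared offline.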

> Snapshotting all of memory and then hashing each 4K page may let you know how many pages changed between snapshots, but how do you then go about determining which pages were changed by *your* process?

You can't tell by just comparing the hashes; it's a best-effort scenario. You can repeat the experiment multiple times to get a feel for the tool's typical impact on the system.

> Also, consider this…the MS memory manager uses "pools" of memory to manage items, such as network connections, the contents of the clipboard, etc., that do not require a full 4KB page for storage. So, if the state of a network connection changes between snapshots, then the entire 4KB page will change…won't it? After all, changing a single bit changes the hash for the entire field, be it a 4KB page or a file.

Well, yes, unless you hash smaller chunks within each page (not a great idea since, as you mentioned, you still don't know what has changed) or use fuzzy hashing. Hashing the pages will still give you an indication of which pages have changed over time, and you can then analyse those pages independently to see what has changed within them.
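To make that concrete, a rough Python sketch of the page-hash comparison (the dump file names are made up; it assumes two raw dumps of the same size taken before and after the tool run):

```
import hashlib

PAGE = 4096  # default page size on a non-PAE Windows box

def page_hashes(path):
    """Return one MD5 digest per 4KB page of a raw memory dump."""
    digests = []
    with open(path, "rb") as f:
        while True:
            page = f.read(PAGE)
            if not page:
                break
            digests.append(hashlib.md5(page).hexdigest())
    return digests

before = page_hashes("before.vmem")   # hypothetical dump file names
after = page_hashes("after.vmem")

changed = [i for i, (a, b) in enumerate(zip(before, after)) if a != b]
print(f"{len(changed)} of {len(before)} pages differ "
      f"({100.0 * len(changed) / len(before):.1f}%)")
```

That only tells you *which* page indices changed, not *why* - which is exactly the limitation being discussed here.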

> I have to spend some more time with Russinovich and Solomon's book but I suspect that using Perfmon to monitor the number of memory pages used by a process will be a better measure of "impact".

If you are considering CLI tools, as you mentioned before, I don't think it's much better, if at all. You would need to run Perfmon and lock it onto monitoring certain processes, which, as you mentioned, would terminate before you could do that.
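One way around the short-lived process problem (not Perfmon itself, just the same idea sketched with the third-party psutil library; the tool path is only an example, and polling can still miss the true peak for very quick tools):

```
import subprocess
import time

import psutil  # third-party: pip install psutil

# Example tool -- substitute whatever CLI utility you're measuring.
proc = psutil.Popen([r"C:\tools\pslist.exe"], stdout=subprocess.DEVNULL)

peak_rss = 0
try:
    while proc.is_running():
        peak_rss = max(peak_rss, proc.memory_info().rss)  # current working set
        time.sleep(0.01)                                  # sample every 10 ms
except psutil.NoSuchProcess:
    pass  # the tool exited between the check and the sample

proc.wait()
print(f"peak observed working set: {peak_rss // 1024} KB")
```

Because the wrapper launches the tool itself, there's no race to attach a monitor before the process terminates.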

> I would include along with that any files and/or Registry keys that are created, accessed or modified by the process.

100% agree with this; in fact, it's more useful than working out memory displacement, but that's not what twig asked about, so I didn't touch on it.

Aside from that, I really would recommend reading the section on memory usage in Forensic Discovery - it illustrates the point rather well. Many people think that the contents of memory are a seething turmoil of pages changing over time, when in fact much of memory is rather static. Performing tests like this is an interesting experiment because you can see how little overall impact running tools has on the system, rather than trying to put an absolute figure on the effect the tools themselves produce.

Plus, it gives twig some ideas, which is what he asked for :)

- Tom


   
hogfly
(@hogfly)
Reputable Member
Joined: 21 years ago
Posts: 287
 

This is a pretty popular topic these days, isn't it?

I'm currently developing a methodology to do just what twig is asking about - impact and a number of other things for live response tools.

Some thoughts for capturing memory and analyzing it.

Capture memory from a baseline image
Wrap the tool to pause post-execution (Harlan did this a few years ago on his blog, in fact… see the wrapper sketch below)
Capture memory again
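A minimal sketch of such a wrapper (not Harlan's original, just the general idea; it simply holds execution between two manual memory captures):

```
import subprocess
import sys

# Usage: python wrap.py <tool.exe> [args...]
cmd = sys.argv[1:]

input("Capture the 'before' memory image, then press Enter to run the tool...")
result = subprocess.run(cmd)
input(f"{cmd[0]} exited with code {result.returncode}. "
      "Capture the 'after' image, then press Enter to finish.")
```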

Impact analysis
Use WinDbg to look at the memory pools for each process, and the pages consumed
Perhaps execute the tool from within WinDbg
WinDbg is scriptable
Script the execution of the tool and wrap it with a Perfmon script (if you baseline ahead of time, you'll already know which processes you need to watch…)
Check out the Detours API from Microsoft - there's a free edition
You'll also need to look at virtual memory in case memory is paged out to swap.


   
keydet89
(@keydet89)
Famed Member
Joined: 21 years ago
Posts: 3568
 

To tgoldsmith and hogfly primarily, and to others in general…

I guess I'm just not seeing the need for, or the efficacy of, determining how much total memory changes when a tool is used on a system. Most CLI tools perform their function quickly and then terminate, thereby freeing the memory pages they've used. Measuring memory displacement may be a good idea…but what does it lead to?

My reasoning is this…active processes and threads in memory will have the memory pages that they are actively using right there in memory with them. This, of course, changes with time…that is the very nature of volatile memory. If a new process is introduced into memory *and* there is a need for additional space to support that process, then pages will be swapped out to the pagefile, where they can still be accessed. However, those pages that are actively used by a process/thread will not be deleted or overwritten; that does not happen until the process/thread has freed the memory.

So…the next question is, if introducing a new process to capture the contents of physical memory has the effect of potentially moving memory pages for other active processes to the pagefile, and potentially overwriting memory pages that have already been freed for use, how does this affect our investigation?

Does this mean that "evidence" has been tromped on?

I guess I'm just trying to understand why it's so important to determine the total amount of memory changed when a new process is introduced, particularly if we know that regardless of how many tests we run in the lab, the first time we do this in the real world, it will be different. Why is this one measure so important?

H


   
(@tgoldsmith)
Eminent Member
Joined: 19 years ago
Posts: 35
 

> Why is this one measure so important?

In my opinion, it isn't that important; I was just trying to help out twig :-) As you mentioned, it's far more important to know which files and registry keys are modified, so that you can go back and correct the timelines and the like (or at least take the changes into account) when you are looking at the disk image. Of course, this is also nice and easy to do reliably.
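For the file side, even a crude before/after walk catches most of it (the scope below is just an example - Procmon will obviously give you far more detail, and the same before/after idea works for the Registry by diffing reg export output):

```
import os

def fs_snapshot(root):
    """Map every file under root to its (size, mtime) pair."""
    snap = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # locked or vanished mid-walk
            snap[path] = (st.st_size, st.st_mtime)
    return snap

before = fs_snapshot(r"C:\Windows")   # example scope
# ... run the live response tool here ...
after = fs_snapshot(r"C:\Windows")

created = after.keys() - before.keys()
deleted = before.keys() - after.keys()
modified = {p for p in before.keys() & after.keys() if before[p] != after[p]}
print(len(created), "created,", len(deleted), "deleted,", len(modified), "modified")
```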

I think it's still worthwhile to talk about these things so that other people think about them. There are a lot of students on this board, and I remember that back in the day getting ideas for study was really helpful. There is no reason *not* to do something if you learn a little more about the subject in the process.

Oh, and nice ideas, hogfly! You could certainly get some interesting results out of that. I might have a play sometime :-)


   
keydet89
(@keydet89)
Famed Member
Joined: 21 years ago
Posts: 3568
 

> I think it's still worthwhile to talk about these things so that other people think about them.

Agreed. This is an area that does need to be addressed.

I like Hogfly's ideas, as well, although in my mind the same questions apply…what do Hogfly's tests hope to show? I see that he starts one section with "Impact Analysis", but again…is this the right way to define "impact"?

H


   
hogfly
(@hogfly)
Reputable Member
Joined: 21 years ago
Posts: 287
 

While I'm not ready to tip my hand completely since I'm still developing the methodology, I will say that I'm defining impact and impact analysis as follows:

Proposed activity: execution of a utility under circumstances warranting live response.
Impact: any effect caused by a proposed activity.
Impact analysis: analysis and measurement of any measurable effect caused by a proposed activity.

Being able to determine relevance (as in what specifically we need to look out for when it comes to memory) will come after we gain a deeper understanding of what our tools are actually doing.

So, what does this translate to?
For the time being, measuring the impact of a tool on Memory, File system, Registry, and network states.

> I guess I'm just not seeing the need for, or the efficacy of, determining how much total
> memory changes when a tool is used on a system. Most CLI tools perform their function
> quickly and then terminate, thereby freeing the memory pages they've used. Measuring
> memory displacement may be a good idea…but what does it lead to?

I'm with you on this. While it may not be a huge impact if a tool displaces 4k, if our entire live response procedure displaces up to 256MB of memory, we need to know that, because a lot of data is contained in 256MB, some of which could easily be argued to have contained exculpatory evidence. And while the contents of memory may still be there, since things are paged to swap, what are we overwriting in swap by paging out to it?
<insert theory since not tested yet>If a tool that dumps event logs needs to be run three times to dump each of the three major event log types (app, sec, sys), thereby consuming three times the memory pages, maybe some proof of that will lead to a more efficient multi-threaded approach that consumes less memory</theory>

This leads to a few things in my mind, some of which are academic.

1) A deeper understanding of memory management.
2) It will assist in answering the questions that will undoubtedly be raised about memory analysis and capturing memory from a live system. I anticipate there will be a lot of junk-science attacks on memory capture and analysis as it becomes more prevalent.
3) We must measure impact on sources of volatile data in order to understand what actually needs to be collected, to determine which tools will do what we need and which are the most efficient and accurate, and, by doing so, to validate tools that are used for live response but were not designed for it.


   
keydet89
(@keydet89)
Famed Member
Joined: 21 years ago
Posts: 3568
 

> 3) We must measure impact on sources of volatile data

Agreed, and this is something we keep coming back to…as if we're stuck in a tarpit.


   
keydet89
(@keydet89)
Famed Member
Joined: 21 years ago
Posts: 3568
 

> I really would recommend reading the section on memory usage in
> Forensic Discovery - it illustrates the point rather well.

Reading through it again, it occurs to me that the operating systems used are variations of *BSD and Linux, which have different memory management architectures from Windows. I'm not suggesting that things would be vastly different…I'm suggesting that we'd need to determine some means of replicating the experiments in the FD in order to see how applicable that data is to Windows…

H


   
keydet89
(@keydet89)
Famed Member
Joined: 21 years ago
Posts: 3568
 

> …answering the questions that will undoubtedly be raised…

I'm not disagreeing with you, but what I would like to ask is…what are those questions?

It occurs to me that I really can't find anyplace where a direct question about this has been documented…with all this talk about answering questions, I'm curious as to what that question (or those questions) would be…

H


   