
Memory Acquisition Tools - Occupied RAM

Dan_Forensics
(@dan_forensics)
New Member

Hi

I am conducting some tests to determine the efficacy of open-source memory acquisition tools. Obviously, a pertinent performance metric is the amount of RAM a tool occupies while the acquisition is being performed. Looking through some academic journals, I have seen some authors record only the private bytes attributed to a tool, while others record working set/working set peak, virtual bytes/virtual bytes peak, and pagefile bytes/pagefile bytes peak.

I'm wondering why there is such a disparity in testing, and which metrics I should include to give a true reflection of how much RAM a tool occupies. I hope this makes sense.

Any information would be great.

Thanks
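For reference, on Windows most of the counters mentioned above (private bytes, working set/peak, pagefile bytes/peak) come from a single API call, `GetProcessMemoryInfo`, which fills the `PROCESS_MEMORY_COUNTERS_EX` structure from psapi.h. Below is a minimal ctypes sketch of that structure and a hedged helper for reading the current process's own counters; note that "virtual bytes" is not in this structure and comes from a separate query (e.g. the performance counter interface).

```python
import ctypes
import sys

# Layout of the Win32 PROCESS_MEMORY_COUNTERS_EX structure (psapi.h).
# Its fields map directly onto the metrics discussed in this thread.
class PROCESS_MEMORY_COUNTERS_EX(ctypes.Structure):
    _fields_ = [
        ("cb", ctypes.c_uint32),                      # size of this struct
        ("PageFaultCount", ctypes.c_uint32),
        ("PeakWorkingSetSize", ctypes.c_size_t),      # working set peak
        ("WorkingSetSize", ctypes.c_size_t),          # working set
        ("QuotaPeakPagedPoolUsage", ctypes.c_size_t),
        ("QuotaPagedPoolUsage", ctypes.c_size_t),
        ("QuotaPeakNonPagedPoolUsage", ctypes.c_size_t),
        ("QuotaNonPagedPoolUsage", ctypes.c_size_t),
        ("PagefileUsage", ctypes.c_size_t),           # pagefile bytes
        ("PeakPagefileUsage", ctypes.c_size_t),       # pagefile bytes peak
        ("PrivateUsage", ctypes.c_size_t),            # private bytes
    ]

def read_own_memory_counters():
    """Return the counters for the current process, or None off Windows."""
    if sys.platform != "win32":
        return None
    counters = PROCESS_MEMORY_COUNTERS_EX()
    counters.cb = ctypes.sizeof(counters)
    handle = ctypes.windll.kernel32.GetCurrentProcess()
    ok = ctypes.windll.psapi.GetProcessMemoryInfo(
        handle, ctypes.byref(counters), counters.cb)
    return counters if ok else None
```

In practice a test harness would call this (or read the equivalent performance counters) against the acquisition tool's PID at a fixed sampling interval, rather than against itself as in this sketch.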

Topic starter Posted : 19/03/2019 6:20 pm
Passmark
(@passmark)
Active Member

Windows memory management is complex and poorly understood by nearly everyone, hence the differences in what is reported.

As to how important the memory footprint is, it really depends on what you are looking for.

If you are looking at the RAM usage of just a particular process, then it doesn't matter what the other processes are doing or how much RAM they use. Process A won't change its memory usage based on what process B does.

If you are looking at what all the active processes are doing (and dumping physical RAM), then again it doesn't matter so much what the tool's footprint is. Sure, some of the processes might be forced to swap some memory pages to disk, but then the same data is in the paging file. Nothing is actually lost.

The case where it does matter is when you are looking for data in free RAM that isn't currently in use by any process (e.g. dumping all physical RAM and then just doing a grep for strings).
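That "grep the dump for strings" approach can be sketched in a few lines. This is a toy illustration, not a real carving tool: the `image` bytes are a made-up stand-in for a raw physical-memory dump, and it shows why a capture tool with a large footprint is harmful here, since every page the tool allocates overwrites exactly this kind of residual free-RAM data.

```python
import re

def carve_ascii_strings(dump: bytes, min_len: int = 6):
    """Yield (offset, string) for runs of printable ASCII at least
    min_len bytes long in a raw memory image."""
    pattern = re.compile(rb"[\x20-\x7e]{%d,}" % min_len)
    for match in pattern.finditer(dump):
        yield match.start(), match.group().decode("ascii")

# Toy stand-in for a dump: freed pages may still hold old data
# until something (such as the acquisition tool itself) reuses them.
image = b"\x00\x03garbage\xffpassword=hunter2\x00\x00\x07\x1f"
for offset, text in carve_ascii_strings(image):
    print(hex(offset), text)  # prints 0x2 garbage / 0xa password=hunter2
```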

Posted : 19/03/2019 9:23 pm
athulin
(@athulin)
Community Legend

I'm wondering why there is such a disparity in testing, and which metrics I should include to give a true reflection of how much RAM a tool occupies.

The reason for different metrics … may depend on OS platform, as well as what instrumentation services it offers. For most practical use, the metric you ask about – occupied RAM – is not relevant.

The closest would be working set size, or possibly non-shared working set, depending on whether you want shared libraries/other code to be included or excluded (not all platforms provide the latter).

But working set size is measured at some particular time: during *this* second the working set size may be X, during *that* second it may be 0 … because the process got entirely paged out, perhaps because it was waiting for I/O to complete and other processes needed memory to execute.

So you also need to consider whether you measure peak size or some type of averaged size, and over what period of time: the entire lifetime, or something else? And are you looking at system time or process time?

All this is closely related to system tuning, where much of this and other system measurements are important. You may need to refer to the system documentation on that particular topic to be entirely certain you are measuring the right thing.
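The peak-versus-average distinction above is easy to demonstrate. A minimal sketch, with made-up 1-second working-set samples for a hypothetical capture tool that ramps up, gets briefly paged out while waiting on disk I/O, then resumes:

```python
def summarise_samples(samples_kb):
    """Summarise periodically sampled working-set sizes (in KB).

    A single snapshot can mislead: a sample may even be 0 if the
    process happened to be paged out at that instant, so peak,
    average, and minimum tell quite different stories.
    """
    return {
        "peak_kb": max(samples_kb),
        "average_kb": sum(samples_kb) / len(samples_kb),
        "min_kb": min(samples_kb),
    }

# Hypothetical samples: ramp-up, a paged-out dip during I/O, recovery.
samples = [4096, 18432, 18944, 0, 512, 18944, 19200]
print(summarise_samples(samples))
```

Reporting only the peak (19200 KB here) or only one snapshot (possibly 0 KB) would each misrepresent the tool's footprint, which is one plausible reason different papers report different figures.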

Posted : 20/03/2019 8:49 am
Belkasoft
(@belkasoft)
Active Member

I am conducting some tests to determine the efficacy of open-source memory acquisition tools. …

When are you going to publish your results? Are you testing Belkasoft Live RAM Capturer (https://belkasoft.com/ram-capturer)?

Posted : 21/06/2019 6:13 am
steve862
(@steve862)
Active Member

Hi,

I've made a couple of observations, and I've assumed it is mostly or entirely Windows-based computers you are looking at.

Whilst using the tool with the smallest possible footprint is definitely right in theory, I would sacrifice some of that for two other things: reliability and ease of use.

From the reliability point of view, I know a prominent training provider used to teach collecting RAM last when examining a live system, because they said the RAM collection tool could crash the computer. The tool they were recommending is one I never use because I don't trust it. I've used two tools over the years on at least 200 live suspect systems and never had a problem.

From the ease of use point of view, it would be very bad if the RAM dump accidentally got saved to the system being examined. I've been to a job where I had to capture the RAM from about 20 computers under tight time constraints. I've had pressured situations too, for example suspects still present whilst I am examining their devices, or significant health and safety considerations, such as being dressed in full PPE at temperatures of about 90°F. In those situations I want something foolproof.

A tool which automatically saves the dump back out to the harvest drive from which it was executed reduces risk. There are plenty of tools out there where you have to browse to where you want to place the dump. As you get one unrepeatable go at a live system, you definitely want reliability in the tools/approaches you deploy.

Steve

Posted : 25/06/2019 5:30 pm
marky.mark
(@marky-mark)
New Member

… The tool they were recommending was one I never use because I don't trust it. I've used two tools over the years on at least 200 live suspect systems and never had a problem.

Hi Steve,

I hope you are doing good.

Can you tell us which tool you would never use, and which two tools you have used over the years that are reliable?

Thank you.

M.

Posted : 25/06/2019 8:57 pm
Belkasoft
(@belkasoft)
Active Member

Hi,

From the ease of use point of view it would be very bad if the RAM dump accidentally got saved to the system being examined.

Steve

Belkasoft Live RAM Capturer automatically suggests saving to the same media it was run from. Since that is always the investigator's thumb drive, it is foolproof.

Posted : 26/06/2019 5:02 am
steve862
(@steve862)
Active Member

Hi,

So, in answer to the question of which tool I do not use, the quick answer is DumpIt. I know a lot of people use it, but they include several people who have told me it has caused the computer under examination to 'crash' or 'lock up'. In my own testing it failed to collect the RAM once, but was successful on a second try on the same PC. For me to exclude a tool, only a little extra risk is needed if another tool presents lower risk.

RAM Capturer has become my go-to tool for RAM collection alone, and it has not failed me once yet.

I did use Helix in the early days, circa 2007/8, and I used FTK Imager Lite for quite some time. I've used a handful of other data collection tools in different types of jobs; too many to name.

I stopped using FTK Imager Lite for RAM capture some years ago. It had been perfectly reliable, but I would rather use a tool that just saves the dump back to the folder the tool is stored in, as opposed to one that asks me to choose where to put it.

Steve

Posted : 26/06/2019 4:31 pm
marky.mark
(@marky-mark)
New Member

Hi Steve,

Thank you for the info, I will try RAM Capturer when I have the chance.

Have a good day.

M.

Posted : 26/06/2019 6:26 pm