Pitfalls of Interpreting Forensic Artifacts in the Registry


Senior Member

Re: Pitfalls of Interpreting Forensic Artifacts in the Registry

Posted: Nov 02, 2012 04:59

I agree...those who can use the scripts will. However, what the scripts really achieve is a greater level of correlation, not so much interpretation; that is to say that while the scripts bring a lot of information together in one location, the user still has to memorize or look up what each time stamp listed in the USB report means.

My question regarding the user base stems from comments I've heard from analysts that Linux and tools such as SIFT are "too hard". Sure, people will use your scripts, without a doubt...but there's a huge user base out there that ONLY uses EnCase and/or FTK, meaning that they run on Windows systems and therefore cannot use your scripts without installing Cygwin.

After reading through your dissertation, I can see that the end result is a greater level of correlation, along with some valuable research that aids in the interpretation of information available from the Registry...however, I'm having some difficulty seeing where "automated interpretation" was achieved, as analysts still need to know (memorize or look up) what the various time stamps in the USB report correspond to.
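To illustrate the distinction being drawn here between correlation and interpretation: the minimal form "automated interpretation" could take is a lookup table that pairs each raw timestamp source in a USB report with a catalogued meaning, so the analyst no longer has to memorize it. This is a hypothetical Python sketch; the key names and meanings are illustrative only, not taken from the tools under discussion.

```python
# Hypothetical catalogue of what each timestamp source means.
# These entries are illustrative, not authoritative artifact research.
TIMESTAMP_MEANING = {
    "USBStor_key_write": "last update to the device's driver configuration",
    "MountPoints2_write": "last time this user account connected the device",
    "EMDMgmt_write": "last time ReadyBoost evaluated the volume",
}

def interpret(report_rows):
    """Attach a human-readable meaning to each (source, timestamp) row."""
    out = []
    for source, ts in report_rows:
        meaning = TIMESTAMP_MEANING.get(source, "meaning not catalogued")
        out.append((source, ts, meaning))
    return out

rows = [("USBStor_key_write", "2012-10-30 14:02:11"),
        ("Unknown_key", "2012-10-30 14:05:00")]
for source, ts, meaning in interpret(rows):
    print(f"{ts}  {source}: {meaning}")
```

The point of the sketch is only that the lookup step the analyst performs by hand is mechanizable; building a catalogue that is actually correct is the hard research problem.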

Some other comments:

pg 23 - James' Perl module is misidentified throughout the document; first, it is "Parse::Win32Registry". Second, it is a Perl module that includes example scripts as part of the distribution; it is not "a suite of Perl scripts". It would be more correct to say that it "includes" a suite of Perl scripts.

pg 38 - "This is a good demonstration of the need to analyse a system as a whole rather than just focusing on one aspect." <- excellent point, couldn't agree more.

pg 52 - "The output generated from regripper is considerably larger and requires manual correlation..." <- this is the result of a design decision early on in the development of RegRipper, as a number of responses were, "...show me everything, let me decide what is important...". Something similar is seen on pg 57, with the comment "...there are also advantages to using an integrated approach, namely the ability to provide greater correlation and automated interpretation." This is true, but from a design perspective with respect to RegRipper, this is not what folks asked for early on in the development. Also, while the tools described in the paper do provide a higher level of correlation, I'm not sure that I see how they provide for "automated interpretation", per se.

pg 53 - "The output from ssid.pl could be reported by network instance rather than as a list." Point taken...but RegRipper is open source and rather than writing an entirely new tool, you can simply change the plugin, or ask someone (me) to do so.

pg 57 - "Every effort has been made to make sure the tools produce output that would be acceptable in court." Interesting statement, as I'm not sure what it means, exactly.

Again, I greatly appreciate your time and effort in this endeavor, as well as the fact that you made the results of your efforts publicly available. Thank you.  


Re: Pitfalls of Interpreting Forensic Artifacts in the Registry

Posted: Nov 02, 2012 17:41

You must have read the whole document; I am very pleased that you have taken the time to read my research.

One of my main aims was to achieve a greater degree of correlation so I'm very pleased that you think I achieved this.

With regard to "interpretation", I use this in the stricter sense of the word. For example, if I find that a vendor ID is stored in the Registry as 9999, I can choose to report this, or to translate/interpret that information into something more human-readable, like "company xyz". In doing so I am not strictly reporting what is found in the Registry, but an interpretation of that data. This is why I produce two logs: one of the screen output, and an expert log detailing the original data. In the appendices I detail any interpretations made, so that an examiner can fully explain these interpretations if required to do so in court.
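The dual-log approach described above can be sketched as follows. This is a hypothetical Python illustration, not the author's actual code: the vendor table, IDs, and function name are made up, and real USB vendor IDs come from the USB-IF assignments.

```python
# Hypothetical vendor-ID table; real IDs are assigned by the USB-IF.
VENDOR_NAMES = {0x9999: "Company XYZ"}

def report_vendor(raw_id, screen_log, expert_log):
    # The expert log records exactly what was found in the Registry.
    expert_log.append(f"raw vendor id: {raw_id:#06x}")
    # The screen log records the interpretation, flagged as such, so the
    # examiner can always trace it back to the preserved original data.
    name = VENDOR_NAMES.get(raw_id)
    if name:
        screen_log.append(f"vendor: {name} (interpreted from {raw_id:#06x})")
    else:
        screen_log.append(f"vendor: {raw_id:#06x} (no interpretation available)")

screen, expert = [], []
report_vendor(0x9999, screen, expert)
print(screen[0])
print(expert[0])
```

Keeping the uninterpreted value in a separate log is what lets the interpretation be defended, or revisited, later.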

My apologies to James McFarlane if I have misidentified any of his brilliant work. I see that he has a new version available too.

When I set out to do my research I did consider using or writing new modules for RegRipper, but I decided to write my own scripts for several reasons. One of the main ones was that I wanted to do a deep dive into this area and come at it from a different angle, which I hope I did. My tools are currently an individual effort rather than a collaboration, and will absolutely have limitations, as all tools do. My hope is that analysts will see fit to use them alongside some of the other great tools out there, and that some of my research can be used to help develop tools that are yet to come.

It is a great pleasure for me that so many people are showing interest in the research I have done and that it's not just gathering dust somewhere.

I thank you again for all your comments.



Senior Member

Re: Pitfalls of Interpreting Forensic Artifacts in the Registry

Posted: Nov 02, 2012 17:53

Unfortunately, one of the things we see in the "community" is folks who write tools wanting to go their own way, rather than expand upon one of the currently available toolsets or frameworks.

Again, thanks for your work.  

Senior Member

Re: Pitfalls of Interpreting Forensic Artifacts in the Registry

Posted: Nov 02, 2012 23:50

Something interesting that I thought I'd share...

Based on testing of a shellbag parsing plugin, I have seen the information provided for shell items of type 0x2e, which appear to be devices. So I modified the plugin to print out the hex content of the structure after that structure was parsed, so that I could see what the four devices listed in one hive file "looked like."

Of the four devices (labeled A - D), only device D appeared in the USBStor key, and devices A - C appeared in the USB key. All four devices appeared in the Windows Portable Devices key, and only device D appeared in the EMDMgmt key.

I think that what this shows is that you can't start at the USBStor key as your initial point and work from there; instead, you need to gather all of the information first, and after you've completed the process of correlating it for specific devices, look at what you have left over.
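The cross-key presence pattern in this post can be expressed as a small correlation sketch (hypothetical Python; the sets mirror the four devices A - D and the keys described above, and the data structure itself is made up for illustration):

```python
# Which devices (A-D) appear under which Registry keys, per the post.
presence = {
    "USBStor":                  {"D"},
    "USB":                      {"A", "B", "C"},
    "Windows Portable Devices": {"A", "B", "C", "D"},
    "EMDMgmt":                  {"D"},
}

# The full device population only emerges from the union of all keys.
all_devices = set().union(*presence.values())

# Starting from USBStor alone would miss devices A-C entirely.
missed = all_devices - presence["USBStor"]
print("missed if USBStor-only:", sorted(missed))

# Gather everything first, then correlate each device against every key
# and examine what is left over.
for dev in sorted(all_devices):
    keys = [k for k, devs in presence.items() if dev in devs]
    print(dev, "->", keys)
```

The sketch is just set arithmetic, but it makes the methodological point concrete: any single key gives an incomplete device inventory.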

Also, the MountPoints2 subkeys are a very good source for tying a device to a logged-on user, but the shellbags can be, as well.


Re: Pitfalls of Interpreting Forensic Artifacts in the Registry

Posted: Nov 03, 2012 03:54

That is indeed interesting. I did quite a lot of work analysing the order of key population on insertion of a USB storage device. I only included a selection in my thesis, as I had to cull some research from it. I set up several virgin systems under XP, Vista and Windows 7, and in all cases when I installed a USB storage device there was an entry in USBStor. Out of interest, in the hives you have, is a serial number recorded anywhere, or do you believe that a USB storage device was installed without a serial number being registered? If they are sample hives rather than a user's actual hives, I would love to look at them.



Senior Member

Re: Pitfalls of Interpreting Forensic Artifacts in the Registry

Posted: Nov 03, 2012 16:18

This was a very interesting piece of research -- thank you for sharing it in this way. Minor theses can be very useful, yet are often difficult to identify.

A question:

In section 2.5.2 you state that "Parse::Win32Registry has proved accurate and reliable in the field", yet I find nothing to back up that point of view. What is it based on?

A possibly irrelevant note:

One interesting question that is peripheral to your paper, but which affects it, is that of identification and interpretation of traces. You report a number of points where current interpretation is at odds with your own test observations. Most current findings are not clearly based on any kind of scientific discovery process, so it should probably not be surprising to find such inconsistencies. But it does raise the question: how can identification of traces be done ... well, more systematically and less prone to personal factors influencing the result? And how can the testing of those traces be comprehensive enough to provide a reliable interpretation of the trace data? (Here's where it may be possible to get an error rate, as requested by the Daubert criteria.)

I don't expect you to have an answer -- but it is something I had almost hoped to see mentioned or acknowledged in the final chapter, as it is part of the foundation on which your tools were built.

It's a problem in just about any field of study -- see for example Arbesman: "Truth decay: The half-life of facts" in New Scientist (September 25, 2012), which starts by observing that the number of human chromosomes was, in 1912, reasonably authoritatively asserted to be 48 ... until 1956, when fresh counting and recounting did not identify more than 46 of them. In computer forensics the situation is even worse: while human 'system architecture' seems fairly static, software manufacturers may introduce changes at almost any time, particularly in undocumented areas.


Re: Pitfalls of Interpreting Forensic Artifacts in the Registry

Posted: Nov 03, 2012 18:23

Hi Athulin,

In response to your question about Parse::Win32Registry: it is used by RegRipper, which I know has an active field user base. I also did a lot of testing directly with Parse::Win32Registry, comparing its output to what was in the Registry, and I found it to be accurate and reliable.

With regard to your second note, it is far from irrelevant, and I did do more research around this (I didn't include it in my paper as it was already quite large). It is an area I have considered for further research: the idea of "at what point, if ever, is data real, or is it always just someone's interpretation of what is real?" As for making the process of tracing data more scientific, I think this is a big but very worthwhile question. With regard to my own tools, as a starting point I preserved the source, logged everything, tested my output and documented any manipulations performed. I took this approach so that any conclusions or reports could be explained within current understanding, and possibly revisited in the future based on new developments.

Thank you for your comments and pointer to the New Scientist article.

Regards, Jacky
