
Extract $J  

Page 1 / 2
Cults14
(@cults14)
Active Member

Hello

I should maybe know this already, but I need to extract a $J file from a UsnJrnl file in a shadow copy and I'm struggling.

I have already mounted an image and used OSForensics 4.0 to gain access to shadow copies, but saving the UsnJrnl to local disk doesn't give me access to the $J stream.

I believe that a more recent version of OSForensics might have the capability, but I'm not in a position to obtain the update (I am properly licensed!).

Thanks, Peter

Quote
Posted : 12/12/2019 3:08 pm
minime2k9
(@minime2k9)
Active Member

I thought that the $usnjrnl (and by association $J ADS) were not included as part of the volume shadow copies.
If that is the case, then it should give you the same data as the one from the current system.

I take it you are looking to recover previous records from the $J file?

ReplyQuote
Posted : 12/12/2019 4:38 pm
joakims
(@joakims)
Active Member

I made a very simple tool to do just that. You can try it out: https://github.com/jschicht/ExtractUsnJrnl

ReplyQuote
Posted : 12/12/2019 7:56 pm
UnallocatedClusters
(@unallocatedclusters)
Senior Member

I made a very simple tool to do just that. You can try it out: https://github.com/jschicht/ExtractUsnJrnl

Chrome blocked the download from GitHub as "dangerous".

I have a UsnJrnl file I currently need to analyze myself. Forensic Explorer could extract file names from the UsnJrnl file but not associated metadata dates. OSForensics could not parse the UsnJrnl file at all (I have sent the file to David Wren of Passmark for help).

If your tool works, I will donate to your Venmo or PayPal.

ReplyQuote
Posted : 12/12/2019 8:40 pm
joakims
(@joakims)
Active Member

The tool is open source and not dangerous. It does one thing and does it well. To extract from a VSC you need the volume mounted so that the shadow copy is exposed through an OS symbolic link.

ReplyQuote
Posted : 12/12/2019 8:47 pm
minime2k9
(@minime2k9)
Active Member

I do have a tool that will recover all records from a disk, deleted and live.
It's usually used internally here, but I'll make it available to those who want it.
As a general rule, I get at least 2x-3x more records than from the live $J file.

For the moment, anyone who wants a copy can message me on here. If anyone has a website and would like to host it, I'll make it available.

ReplyQuote
Posted : 12/12/2019 8:47 pm
joakims
(@joakims)
Active Member

Those records normally exist in five different places: the active journal, VSCs, unallocated space, file slack, and memory dumps. It may be nice, though, to know where a given record originated from. Is it a carver like this: https://github.com/jschicht/UsnJrnlCarver or something else? I'll send you a PM.

ReplyQuote
Posted : 12/12/2019 8:59 pm
thefuf
(@thefuf)
Active Member

Use dfir_ntfs ( https://github.com/msuhanov/dfir_ntfs ) to mount each shadow copy, then use fls and icat (The Sleuth Kit) to extract the $J data. Optionally, use dfir_ntfs again to parse the $J data.

ReplyQuote
Posted : 12/12/2019 9:13 pm
Passmark
(@passmark)
Active Member

The USN journal file has multiple NTFS file streams, so I am not sure of the value of extracting just the $J stream by itself.
The other stream in the same file, $Max, contains important values: Maximum Size, Allocation Delta, USN ID, and Lowest Valid USN. See
https://flatcap.org/linux-ntfs/ntfs/files/usnjrnl.html

So the $Max information is pretty important if you want to do anything with the $J stream.
Also, to make sense of the $J stream, you need the corresponding MFT (in order to work out the file names for each record).

In addition to having multiple NTFS streams, the $UsnJrnl file is a sparse file (the size on disk is smaller than the size of the file). So any attempt to extract just the $J stream also needs to take this into account. Do you want it sparse, or not, once extracted?

OSF V7 can carve out the $J stream if you want, but for what purpose?

I think it might make more sense to just extract the entire $UsnJrnl file from the image. With both streams intact and the sparse attribute intact. If you need help with this let me know.

From V5 of OSF there was also a built-in $UsnJrnl viewer, but that won't help you if you are stuck on V4 (unless you use the V7 trial).
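To illustrate what working with an extracted $J stream involves, here is a minimal sketch of parsing USN_RECORD_V2 structures from raw $J bytes in Python. The field layout follows the documented USN_RECORD_V2 structure; the zero-skipping at the start accounts for the sparse region mentioned above. This is a simplified illustration, not any of the tools discussed in this thread.

```python
import struct

def parse_usn_records(data):
    """Parse USN_RECORD_V2 structures from raw $J bytes.

    Extracted $J data often starts with a long run of zeros (the
    sparse region); records only appear after it, so zero-length
    headers are skipped here.
    """
    records = []
    offset = 0
    while offset + 8 <= len(data):
        rec_len, major, minor = struct.unpack_from("<IHH", data, offset)
        if rec_len == 0:
            # Inside the sparse/zeroed region: step ahead 8 bytes.
            offset += 8
            continue
        if major == 2 and offset + rec_len <= len(data):
            (file_ref, parent_ref, usn, timestamp, reason, source,
             sec_id, file_attrs, name_len, name_off) = struct.unpack_from(
                "<QQQQIIIIHH", data, offset + 8)
            name = data[offset + name_off:
                        offset + name_off + name_len].decode("utf-16-le")
            records.append({
                "usn": usn,
                "file_ref": file_ref,
                "parent_ref": parent_ref,
                "reason": reason,
                "name": name,
            })
        offset += rec_len
    return records
```

Note that each record already carries its own filename, file reference, and parent reference; only full paths require further resolution.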

ReplyQuote
Posted : 13/12/2019 6:19 am
joakims
(@joakims)
Active Member

Just a tiny correction.

Also to make sense of the $J stream you also need the corresponding MFT (in order to work out the file names for each record).

You don't need the MFT to work out the filename of a given UsnJrnl record, as all such records already contain the filename.

Joining in paths would be helpful, though. But even paths are not that straightforward to attach to the data set, as directory structures may have changed before the snapshot was taken. In that sense, UsnJrnl records actually contain enough information to build a partial (or temporary) path that would yield a more accurate representation of the current path for a given object than the MFT alone could, if the relevant records are captured in the journal.
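The idea of building partial paths from the journal alone can be sketched as follows. The record dicts (with hypothetical 'usn', 'file_ref', 'parent_ref', and 'name' keys) are an assumed input shape, not any particular tool's output; the point is that chaining parent references through records already in the journal yields partial paths without consulting the MFT.

```python
def build_partial_paths(records):
    """Join partial paths from USN records alone (no MFT needed).

    Each record carries its own filename plus a parent file
    reference. If a record for the parent directory also appears in
    the journal, its name can be chained in; otherwise the path
    stays partial (marked with a leading "?"). Records are assumed
    to be in USN order, so later records win and a renamed
    directory resolves to its most recent name.
    """
    names = {}
    for rec in records:
        names[rec["file_ref"]] = rec

    def resolve(rec, depth=0):
        if depth > 64:          # guard against reference cycles
            return "?/" + rec["name"]
        parent = names.get(rec["parent_ref"])
        if parent is None:
            return "?/" + rec["name"]   # parent not captured in journal
        return resolve(parent, depth + 1) + "/" + rec["name"]

    return {rec["usn"]: resolve(rec) for rec in records}
```

The "last name wins" rule is exactly where the renamed-directory challenge shows up: a single resolved name per directory cannot reflect which name was current at each record's timestamp.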

ReplyQuote
Posted : 13/12/2019 6:15 pm
minime2k9
(@minime2k9)
Active Member

I did try getting my tool to re-create the file path. The issue is that, with the 2,000,000 records usually extracted, it took ages!

I may look at re-doing this in a future, newer version of my tool.

ReplyQuote
Posted : 13/12/2019 6:18 pm
joakims
(@joakims)
Active Member

I made a proof of concept some time ago for rebuilding paths. Ended up making a separate program for it and making use of MariaDB. But there are some challenges with doing something like that, for instance renamed directories.

ReplyQuote
Posted : 13/12/2019 6:33 pm
UnallocatedClusters
(@unallocatedclusters)
Senior Member

The tool is open source and not dangerous. It does one thing and does it well. To extract from a VSC you need the volume mounted so that the shadow copy is exposed through an OS symbolic link.

I believe you that the tool is not malware - it is the first time Chrome itself has blocked a download on the laptop I used to download the zip file (weird).

I downloaded the desktop client version of GitHub - maybe that will let me "clone" the repository if I am using the correct terminology.

ReplyQuote
Posted : 13/12/2019 8:49 pm
joakims
(@joakims)
Active Member

I downloaded the desktop client version of GitHub - maybe that will let me "clone" the repository if I am using the correct terminology.

Either that, or download the zip using another browser, or compile the au3 source yourself.

The usual annoyance with AV is that these types of compiled exes are blacklisted by default by less sophisticated AV products.

ReplyQuote
Posted : 13/12/2019 9:05 pm
Passmark
(@passmark)
Active Member

I did try getting my tool to re-create the file path. The issue is, that with the 2,000,000 records that were usually extracted, it took ages!

After some playing around optimising the process, we got up to around 100,000 path lookups per second in OSF. Of course, it depends a lot on the hardware and the image format.
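The thread doesn't say how OSF achieves that rate, but one common way to make millions of path lookups cheap is to memoise directory resolution, so each parent directory is resolved once and then served from cache. A minimal sketch, with a hypothetical MFT lookup table (record number to name and parent) standing in for a parsed $MFT:

```python
from functools import lru_cache

# Hypothetical lookup table: MFT record number -> (name, parent record
# number). In practice this would come from a parsed $MFT.
MFT = {
    5: ("", None),            # root directory
    40: ("Users", 5),
    41: ("peter", 40),
}

@lru_cache(maxsize=None)
def dir_path(record_no):
    """Resolve a directory's full path once, then serve it from cache."""
    if record_no is None or record_no not in MFT:
        return "?"                     # parent unknown: partial path
    name, parent = MFT[record_no]
    if parent is None:
        return ""                      # root contributes no component
    return dir_path(parent) + "/" + name

def record_path(parent_record_no, filename):
    # Millions of records typically share a few thousand parent
    # directories, so the cached lookup amortises to near O(1) per record.
    return dir_path(parent_record_no) + "/" + filename
```

With the cache, path reconstruction cost scales with the number of distinct directories rather than the number of journal records.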

ReplyQuote
Posted : 15/12/2019 10:14 pm