mft2csv - NTFS systemfile extracter and $MFT decoder

Forensic software discussion (commercial and open source/freeware). Strictly no advertising.

Ddan
Member
 

Re: mft2csv - NTFS systemfile extracter and $MFT decoder

Posted: Dec 30, 11 10:19

I'm stretching my memory here.

I don't think there are two independent ways to determine whether data is compressed. You only look at the first two bytes when the compression flag is set and the data run shows 16-cluster sets containing sparse data, ie you need both. Otherwise the first two bytes are just data.

When the data run indicates compression though, the first two bytes, in little-endian format of course, are treated as 4 bits plus 12 bits. If the highest bit is set, then we actually have compression. My experience, mainly with XP, is that the 16-bit word is either &HBxxx or &H3FFF for a cluster size of 4096. The &HB indicates compression, the &H3 indicates no compression.

The remaining 12 bits indicate the length of the compressed sub-block minus 3 bytes. The length includes the first two bytes. So &H3FFF gives an uncompressed block of &H1002 bytes. That is, two more than the uncompressed length. Also for a full-size compressed block, the (xxx +3) bytes will expand to 16 clusters.

One other comment in regard to the &HB (ie 1011 in bits) is that no-one seems to know what the lower two set bits mean. It seems they are always set.
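The 4-bit/12-bit header split Ddan describes can be sketched in Python (a minimal illustration; the function name is mine, not from the tool):

```python
import struct

def parse_subblock_header(data: bytes) -> tuple[bool, int]:
    """Parse the 2-byte header of an NTFS compression sub-block.

    Returns (is_compressed, total_subblock_length_in_bytes).
    The low 12 bits hold the sub-block length minus 3, and the
    highest bit of the top nibble signals actual compression.
    """
    (header,) = struct.unpack("<H", data[:2])   # little endian
    is_compressed = bool(header & 0x8000)
    length = (header & 0x0FFF) + 3              # includes the 2 header bytes
    return is_compressed, length

# &H3FFF: uncompressed sub-block, length = &HFFF + 3 = &H1002 bytes
print(parse_subblock_header(b"\xff\x3f"))  # (False, 4098)
# &HB004 (made-up value): compressed sub-block of 4 + 3 = 7 bytes
print(parse_subblock_header(b"\x04\xb0"))  # (True, 7)
```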

Ddan  
 
  

Ddan
Member
 

Re: mft2csv - NTFS systemfile extracter and $MFT decoder

Posted: Jan 01, 12 07:47

I knew I should not have relied on my memory to talk about compression. I didn't quite get it right.

The 16 clusters are called a 'compression unit', and the compression is actually done on a 'compression block', which is always 4096 bytes (or less at the end of the file).

So for a small disk with cluster size 512 bytes, a compression unit is only two blocks. A large disk with cluster size 4096 bytes has 16 blocks.

The 16 bit header is always &HBxxx or &H3FFF, irrespective of cluster size.
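The block arithmetic above amounts to a one-liner; a small sketch (the function name is mine):

```python
def blocks_per_compression_unit(cluster_size: int,
                                unit_clusters: int = 16,
                                block_size: int = 4096) -> int:
    """How many 4096-byte compression blocks fit in one
    16-cluster compression unit for a given cluster size."""
    return (cluster_size * unit_clusters) // block_size

print(blocks_per_compression_unit(512))   # 2 blocks per unit (small disk)
print(blocks_per_compression_unit(4096))  # 16 blocks per unit (large disk)
```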

Hope this clarifies everything.

Ddan  
 
  

joakims
Senior Member
 

Re: mft2csv - NTFS systemfile extracter and $MFT decoder

Posted: Jan 16, 12 03:02

@Ddan
Thank you once more for your explanations! They are much appreciated.

I just made some updates:

mft2csv v1.6
Added fixups. However, this is hardcoded to handle a record size of 1024 bytes.

@Ddan
Do you know the formula for getting record size?

NTFS Systemfile extracter v1.7
I am a bit stuck with the extraction of compressed data. I believe I have understood most of it now, but am facing weird issues with the extracted data. For instance is data correctly extracted up until a certain run, but after that the correct data is in fact extracted but appear corrupted in that 1 byte of random is added to the extracted data at arbitrary locations several places. I simply do not understand what is going on, and don't have much time to investigate this issue further. Therefore I post the most current version in case someone is curious and wants to look at it. The relevant code for extracting the compressed data is at around line 1060 and 1100. Furthermore, I acknowledge that the implementation of runs could have been done differently and possibly better, by doing the arrays smarter.
_________________
Joakim Schicht

github.com/jschicht 
 
  

Ddan
Member
 

Re: mft2csv - NTFS systemfile extracter and $MFT decoder

Posted: Jan 17, 12 06:15

I've never seen any size other than 1024 bytes. Maybe some earlier versions of NTFS, prior to XP, were different.

The formula uses the signed value at offset &H40 in the boot record. This is the number of clusters per MFT record. A negative number indicates that the record size is less than a cluster, and the formula is 2^(-1*value). The normal value is &HF6, ie -10 when read as signed, so 2^(-1*-10) = 1024.

Having said that, though, some lateral thinking makes it a bit more obvious. It's probably a safe assumption that for any drive or image, the MFT record size is fixed. We also know that fixup is done on a sector-by-sector basis. If you look at the MFT record for the $Mft, the size of the fixup array (aka Update Sequence Array) is at &H6. This is always one more than the number of sectors to be fixed up! So for the usual value of 3, the record must be 2 sectors long, ie 1024 bytes. You can put this into perspective by looking at an INDX record: its fixup array size is usually 9, so its size is 8 sectors, ie 4096 bytes.

The latter approach is good particularly when the boot record is missing or damaged.
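Both approaches Ddan describes can be sketched as follows (function names are mine; the boot-sector variant assumes the signed clusters-per-record value has already been read from offset &H40):

```python
def mft_record_size(clusters_per_record: int, cluster_size: int) -> int:
    """Record size from the signed clusters-per-MFT-record value at
    boot-sector offset 0x40. A negative value means the record is
    smaller than a cluster: size = 2 ** abs(value)."""
    if clusters_per_record < 0:
        return 2 ** (-clusters_per_record)
    return clusters_per_record * cluster_size

def record_size_from_fixup(usa_count: int, sector_size: int = 512) -> int:
    """Record size inferred from the Update Sequence Array count at
    offset 0x06 of the record: the count is one more than the number
    of sectors fixed up."""
    return (usa_count - 1) * sector_size

print(mft_record_size(-10, 4096))  # 1024: &HF6 read as signed is -10
print(record_size_from_fixup(3))   # 1024: the usual FILE record value
print(record_size_from_fixup(9))   # 4096: the usual INDX record value
```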

Ddan  
 
  

joakims
Senior Member
 

Re: mft2csv - NTFS systemfile extracter and $MFT decoder

Posted: Jan 27, 12 03:48

By accident the wrong version of mft2csv was uploaded last time, with an option to detect record slack. That was never really implemented, because I believed it would be too time-consuming to process, with lots of false positives. But while at it, would it make sense to implement an option to detect such slack?

The way I see it, each and every byte between the attribute end marker (0xFFFFFFFF) and offset 0x3FE of the record must be compared against 0x00. And if implemented, would it make sense to dump slack data into a subfolder using a naming convention of [IndexNumber]_[FileName].bin or something similar?
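The scan described above could be sketched like this (a hypothetical helper of my own, assuming a 1024-byte record where the last word at 0x3FE belongs to the fixup value):

```python
def has_record_slack(record: bytes, record_size: int = 1024) -> bool:
    """Check whether any non-zero byte exists between the attribute
    end marker (0xFFFFFFFF) and offset record_size - 2 (0x3FE for a
    1024-byte record), where the fixup value occupies the last word."""
    marker = record.find(b"\xff\xff\xff\xff")
    if marker == -1:
        return False
    slack = record[marker + 4 : record_size - 2]
    return any(b != 0 for b in slack)

# Synthetic 1024-byte record: end marker at 0x100, slack data at 0x200
rec = bytearray(1024)
rec[0x100:0x104] = b"\xff\xff\xff\xff"
print(has_record_slack(bytes(rec)))   # False: only zeros after the marker
rec[0x200:0x204] = b"SLCK"
print(has_record_slack(bytes(rec)))   # True: non-zero bytes in the slack
```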
_________________
Joakim Schicht

github.com/jschicht 
 
  

joakims
Senior Member
 

Re: mft2csv - NTFS systemfile extracter and $MFT decoder

Posted: Jan 31, 12 05:18

Added a console application named MFTRCRD that dumps the same information as mft2csv, but to the console. It is much faster when just looking at one particular file at a time, like when you're testing and experimenting.
_________________
Joakim Schicht

github.com/jschicht 
 
  

CyberGonzo
Senior Member
 

Re: mft2csv - NTFS systemfile extracter and $MFT decoder

Posted: Feb 06, 12 16:18

@Ddan

I'm looking into your fixup explanation. Your example is for 1024-byte MFT records and 512 bytes per block. Is it fair to assume that the number of fixup words (in your example 3) will always be 'number of blocks in an MFT record' + 1?

PS. We're discussing NTFS in thread:
www.forensicfocus.com/...pic&t=8702
Feel free to chime in.  
 

Page 4 of 10