mft2csv - NTFS systemfile extracter and $MFT decoder

 Ddan
(@ddan)
Eminent Member
Joined: 14 years ago
Posts: 42
 

God, I am having a bad day. I think you are right.

Just looked at a few records and I see you are probably talking about the 0x82794711 string? Is that the same as on your system? I don't have an explanation for that.

Ddan


   
joakims
(@joakims)
Estimable Member
Joined: 15 years ago
Posts: 224
Topic starter  

Yes, it is the god damn 82794711 I am talking about ;) (quite annoying...)


   
 Ddan
(@ddan)
Eminent Member
Joined: 14 years ago
Posts: 42
 

Just to stir the pot a little, I notice that the same four bytes occur in RCRD records as well!!!

Ddan


   
 Ddan
(@ddan)
Eminent Member
Joined: 14 years ago
Posts: 42
 

Hi Joakim,

I wonder if you can help me. I've been playing around with your source code for NTFS_Sysfiles_Extracter_v1.8 and I seem to be having some sort of problem. When I compile it, it gives a different exe size than the exe that you supply. Also, my exe does not produce the same output for an extracted file as your exe does. The file in question is a compressed file, and neither exe produces the correct output.

I don't think it is simply different versions of Autoit, but in case it is, I am using Autoit v3.3.6.1 and version 3.7 (for version 3.3.6.1) of WinApiEx.

Is it possible that you actually compiled a different version of the source code for v1.8?

Ddan


   
joakims
(@joakims)
Estimable Member
Joined: 15 years ago
Posts: 224
Topic starter  

I will have a look at the versions. If I remember correctly, they are both the newest, but I will double check. Either way, the compression part was never completely solved, despite your nice explanation. If I remember correctly, I was only able to decompress the first 4 KB or something.

Edit:
For lack of time to follow this up this week: I think only the latest versions were compiled with version 3.3.8.0. Version 3.3.8.1 is now the latest. Additionally, the version of WinApiEx it was compiled with was an earlier one than the latest (updated on 25 March), and that is probably the reason for the differing file sizes. Don't worry about that, as you can compile them anew yourself. Let me know if you find any errors in the code (there are likely some)..
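For context on that 4 KB limit: NTFS stores compressed data in 16-cluster compression units, and inside each unit the data is a series of LZNT1 chunks of at most 4096 decompressed bytes, each prefixed by a 2-byte header, so decoding only the first chunk yields exactly the first 4 KB. A rough, Windows-only Python sketch (not the AutoIt code discussed here) that hands a whole unit to the system decompressor; the function name and the 4 KB-cluster unit size are assumptions:

import ctypes

COMPRESSION_FORMAT_LZNT1 = 2  # format code understood by RtlDecompressBuffer

def decompress_unit(compressed: bytes, unit_size: int = 16 * 4096) -> bytes:
    # Decompress one NTFS compression unit. RtlDecompressBuffer walks every
    # 4 KB LZNT1 chunk in the buffer; handling only the first chunk by hand
    # is what produces just 4 KB of output.
    ntdll = ctypes.WinDLL("ntdll")
    dst = ctypes.create_string_buffer(unit_size)
    final_size = ctypes.c_ulong(0)
    status = ntdll.RtlDecompressBuffer(
        ctypes.c_ushort(COMPRESSION_FORMAT_LZNT1),
        dst, ctypes.c_ulong(unit_size),
        ctypes.c_char_p(compressed), ctypes.c_ulong(len(compressed)),
        ctypes.byref(final_size))
    if status != 0:  # non-zero NTSTATUS means the chunk stream is corrupt
        raise OSError(f"RtlDecompressBuffer returned 0x{status & 0xFFFFFFFF:08X}")
    return dst.raw[:final_size.value]  # caller pads/truncates to the unit size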


   
joakims
(@joakims)
Estimable Member
Joined: 15 years ago
Posts: 224
Topic starter  

There have been some major improvements to the tools: http://code.google.com/p/mft2csv/downloads/list

NTFS File Extracter v3.0
Added full support for compressed and sparse files (see the runlist-decoding sketch below).
$ATTRIBUTE_LIST solved, meaning extremely fragmented files can now be extracted.
Also supports extraction of all ADSs tied to a given file.
Code reorganized for easier reuse.

MFTRCRD v6
Added support for specifying record number as parameter.
$ATTRIBUTE_LIST supported.
Option to dump individual attributes as nicely formatted hex.

mft2csv v1.7
Just some smaller fixes.

Many thanks to DDan for the effort put into the new NTFS File Extracter.

That code has a lot of potential now.
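The sparse-file support mentioned above comes down to how NTFS runlists encode each run: the low nibble of the first byte gives the size of the length field, the high nibble the size of the (signed, relative) LCN offset field, and an offset size of zero marks a sparse run with no clusters on disk. A small Python illustration of that decoding (mine, not the tool's AutoIt code):

def decode_runlist(runlist: bytes):
    # Decode an NTFS runlist (mapping pairs) into (lcn, cluster_count)
    # tuples; lcn is None for sparse runs, which carry no offset field.
    runs, pos, lcn = [], 0, 0
    while pos < len(runlist) and runlist[pos] != 0x00:  # 0x00 ends the list
        header = runlist[pos]
        len_size = header & 0x0F   # bytes used by the run-length field
        off_size = header >> 4     # bytes used by the LCN-offset field
        pos += 1
        length = int.from_bytes(runlist[pos:pos + len_size], "little")
        pos += len_size
        if off_size == 0:          # sparse run: no clusters allocated
            runs.append((None, length))
        else:
            delta = int.from_bytes(runlist[pos:pos + off_size], "little", signed=True)
            pos += off_size
            lcn += delta           # offsets are relative to the previous run
            runs.append((lcn, length))
    return runs

# 0x21: 1-byte length, 2-byte offset -> 0x18 clusters at LCN 0x5634,
# then 0x01: a sparse run of 8 clusters, then 0x00 terminates the list.
print(decode_runlist(bytes.fromhex("21183456010800")))  # [(22068, 24), (None, 8)]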


   
jaclaz
(@jaclaz)
Illustrious Member
Joined: 18 years ago
Posts: 5133
 

Hi Joakim,
just had an occasion to try the nice mft2csv thingy.

All went well :D, really nice and handy.

I have a small feature suggestion. I tested the thingy on a half-@§§edly extracted bunch of sectors that included a few sectors before the actual $MFT (and a few after it).
The program failed with an error.
So, I stripped the first few unrelated sectors and everything went well.

Maybe it would be an idea to parse the file for the first occurrence of "FILE0" (or "FILE*") so that it "auto-detects" the first sector of the $MFT.
As well, the "excess" sectors at the end were reported (correctly) as "UNKNOWN"; again, an idea could be to stop the parsing when no more "FILE0" (or "FILE*") signatures are found (every other sector).
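For illustration, a minimal Python sketch of such an auto-detection ("FILE0"/"FILE*" is simply the 4-byte "FILE" magic followed by the update-sequence-array offset, 0x0030 or 0x002A, so matching the first four bytes is enough); the scan limit here is an arbitrary assumption:

SECTOR = 512

def find_mft_start(path: str, max_scan: int = 64 * 1024 * 1024) -> int:
    # Scan a raw dump sector by sector for the first record starting with
    # the 'FILE' magic and return its byte offset, or -1 if none is found
    # within max_scan bytes.
    with open(path, "rb") as f:
        offset = 0
        while offset < max_scan:
            sector = f.read(SECTOR)
            if len(sector) < 4:
                break
            if sector[:4] == b"FILE":
                return offset
            offset += SECTOR
    return -1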

jaclaz


   
joakims
(@joakims)
Estimable Member
Joined: 15 years ago
Posts: 224
Topic starter  

Thanks for the input jaclaz.

For now the tool assumes that you have the $MFT correctly extracted. But that said, the tool (mft2csv) is up for a major rewrite soon, and lots of stuff will be changed in it. Among other things is physical disk reading (i.e. no need for an extracted $MFT). And much more..

Regarding your input about excess sectors with invalid records, I think it is better not to stop parsing. The reason is that there may be invalid records, like those with the magic "BAAD", where healthy records may continue 1024 bytes further down. Also, there may exist other sorts of bad sectors/records that would break a complete decode if parsing were stopped.
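In other words: classify each 1024-byte record by its magic and keep walking. A rough Python sketch of that policy (my illustration, not the actual AutoIt code), using the same UNKNOWN/ZERO labels the tool reports:

RECORD = 1024  # default $MFT record size

def classify_record(record: bytes) -> str:
    # Decide what a 1024-byte slot holds, without ever aborting the walk.
    magic = record[:4]
    if magic == b"FILE":
        return "FILE"      # healthy record, decode it
    if magic == b"BAAD":
        return "BAAD"      # failed multi-sector transfer; skip it, keep going
    if not any(record):
        return "ZERO"      # unused/blank slot
    return "UNKNOWN"       # not an MFT record (e.g. excess sectors)

def walk_records(data: bytes):
    # Healthy records may resume 1024 bytes further down, so never stop early.
    for off in range(0, len(data) - RECORD + 1, RECORD):
        yield off, classify_record(data[off:off + RECORD])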

Anyway, lots of work has been done lately and there will be much more to come. I am very grateful for any suggestions on how these tools should or could work, any tip on new features and functionality, or maybe a bug report. I'm open to any ideas.

The fact that it is all open source will hopefully tempt others to fiddle with the code and contribute. If so, just let me know..

Right now MFTRCRD is much better at analysing, at least for individual files, where INDX records are resolved, dumped and decoded too.


   
 Ddan
(@ddan)
Eminent Member
Joined: 14 years ago
Posts: 42
 

Hi Joakim,
just had an occasion to try the nice mft2csv thingy.

I tested the thingy on a half-@§§edly extracted bunch of sectors that included a few sectors before the actual $MFT (and a few after it).
The program failed with an error.
So, I stripped the first few unrelated sectors and everything went well.

jaclaz

You didn't say what sort of error it gave. One of the first things that the code does is to read the Boot sector to get things like the cluster size and the location of the $MFT. I assume your first sector would not have been a valid NTFS boot record. Would this explain the error?
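Those first reads look roughly like this (a Python sketch of the published NTFS $Boot layout, not the tool's actual AutoIt source); a stripped dump like jaclaz's would fail the OEM-ID check here instead of crashing later:

import struct

def parse_ntfs_boot_sector(sector: bytes) -> dict:
    # Pull the cluster size and $MFT location out of the first 512 bytes.
    if sector[3:11] != b"NTFS    ":          # OEM ID check
        raise ValueError("not an NTFS boot sector")
    bytes_per_sector = struct.unpack_from("<H", sector, 0x0B)[0]
    sectors_per_cluster = sector[0x0D]
    mft_cluster = struct.unpack_from("<Q", sector, 0x30)[0]
    clusters_per_record = struct.unpack_from("<b", sector, 0x40)[0]
    cluster_size = bytes_per_sector * sectors_per_cluster
    # A negative value means the record size is 2**abs(value) bytes.
    record_size = (clusters_per_record * cluster_size
                   if clusters_per_record > 0 else 1 << -clusters_per_record)
    return {"cluster_size": cluster_size,
            "mft_offset": mft_cluster * cluster_size,
            "mft_record_size": record_size}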

Ddan


   
jaclaz
(@jaclaz)
Illustrious Member
Joined: 18 years ago
Posts: 5133
 

Would this explain the error?

Sure, as said I just extracted an "area" of a disk image (containing the $MFT) and fed it to mft2csv.
The error is/was

AutoIt Error
Line 4839 (File "<path of file>MFT2CSV.exe")
Error Variable used without being declared.

The use for which I tried the tool was "data recovery" (and NOT "Digital Forensics") oriented; I simply do NOT have a valid bootsector.

Evidently the tool if first sector is the first sector of the $MFT "recognizes" it and it works allright.

I also tested it with a (valid) whole disk image (and it failed with the same error).
And I also tested it on a (valid) volume image, extracted from the above (and it failed with the same error).
Then I extracted the actual $MFT from the above image and it worked flawlessly.
So I really cannot understand you. Which tool are you talking about?
I tested MFT2CSV.exe, size 399545, dated 30-06-2012.

The tool (correctly) asks for a $MFT, and I was perfectly aware that by feeding it "something else" I would probably get an error (though I prefer "aggressive" interfaces, like "You dumb@§§, I want a §@ç#ing $MFT, the file you gave me is not a $MFT!" ;), even a plain "Cannot decode file" would have been preferable to the "Variable not declared" error).

The suggestion, which Joakim seems to have got perfectly, was that it should be possible to "skip" everything until the first occurrence of "FILE0" or "FILE*" (this is a "safety" measure for people who, like I did, feed the tool "something else", but actually it is also a "no whining" one, as many people, when using a tool outside of its intended scope and getting an error, will start whining about the tool not working as it should).

The other suggestion, which it seems to me Joakim did not fully get, was slightly different.
I have no problems whatsoever (normally) in finding and extracting a $MFT (even if it has errors/sectors overwritten).
Right now the tool behaves very correctly: once all "real" $MFT entries are finished (because I intentionally fed it a "larger" file), it continues scanning sectors, marking them as either "UNKNOWN" or "ZERO" entries.
The issue I see is that some form of limit should be put on this scanning, as IF a user feeds it a really large file, MFT2CSV will "scan forever" and produce a BIG .csv file (and there is currently no way, except "killing" the process, to stop the scan).

@joakim
I do understand the issue about a (partially) overwritten $MFT.
Maybe a possibility would be to set a default of (say) "Scan max 100 sectors after last valid $MFT entry" and an editbox (or a .ini file) somewhere to change this default value to (again, say) 100, 10000, or 100000.
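As a sketch of that setting (the record size and the default of 100 are just the example values from this post, applied per 1024-byte record rather than per sector):

RECORD = 1024  # $MFT record size

def scan_with_limit(data: bytes, max_after_last_valid: int = 100):
    # Yield record offsets, but give up once `max_after_last_valid` records
    # in a row have gone by without a 'FILE' magic.
    misses = 0
    for off in range(0, len(data) - RECORD + 1, RECORD):
        if data[off:off + 4] == b"FILE":
            misses = 0
        else:
            misses += 1
            if misses > max_after_last_valid:
                return
        yield off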

Another few (very small) suggestions are about possible issues with importing the data into a spreadsheet.
The first thing is that the actual separator for .csv files (unlike what the name implies) is usually dependent on "locale" (please read as "international") settings.
As an example, on a normal Italian system the "list delimiter" is usually the semicolon ";".
The same goes for the date separator: instead of the dash "-", an Italian system would have a "/".
A setting (again as a checkbox in the GUI or as a .ini entry) would be nice.
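That kind of setting is cheap to add in most CSV writers; a Python sketch with a configurable list delimiter and date separator (the column names and row format here are invented for illustration, not mft2csv's actual output):

import csv

def write_records(rows, out_path: str, delimiter: str = ";", date_sep: str = "/"):
    # Write decoded records with a locale-friendly delimiter and date separator.
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f, delimiter=delimiter)
        writer.writerow(["RecordNo", "FileName", "CreationTime"])
        for rec_no, name, timestamp in rows:
            # e.g. "2010-04-26 03:26:59" -> "2010/04/26 03:26:59"
            writer.writerow([rec_no, name, timestamp.replace("-", date_sep)])

# Italian-style output: semicolon list delimiter, slash date separator.
write_records([(0, "$MFT", "2010-04-26 03:26:59")], "mft.csv")
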
There is a further small issue.
AFAIK there is no way to have a $MFT date entry such as (example)

2010-04-26 03:26:59:3640406

recognized by a spreadsheet as a "number".
The most you can do is use a format like (again, example)
yyyy\-mm\-dd\ hh\:mm\:ss;@
That will accept something like

2010-04-26 03:26:59

as a "serial" and thus easily allow numerical operations (such as time differences, e.g. "=Q32-Q31").
Of course it is trivial to insert a column with formulas *like*
=VALUE(LEFT(Q20;11))+VALUE(RIGHT(LEFT(Q20;19);8))
but I wonder if it would be a useful addition to have a setting for it, like either "ignore precision after seconds" or "make a separate column for the thousandths, etc."
(These latter "ideas" are only a possible way to add some "convenience"; they do not represent in any way "real" issues, as anyone who knows how to deal with spreadsheets and .csv files will manage them all right, whilst the "stop scanning if …" is IMHO a *needed* feature.)

jaclaz


   