Dear Forensic Focus members,
I would like to ask for help on a subject regarding icat, foremost and $OrphanFiles. Here goes….
1. I have an image (a flash disk with a FAT file system). Running this command
fls -o 63 -r F image.001 | grep -i file_name
produced this output
-/r * 649873 $OrphanFiles/TAGIHAN.xls
r/r * 122506 $OrphanFiles/PT8D15~
-/r * 1212051 $OrphanFiles/TAGIHA~1.XLS
-/r * 1282702 $OrphanFiles/TAGIHA~1.XLS
-/r * 1374865 $OrphanFiles/TAGIHA~1.XLS
-/r * 1472145 $OrphanFiles/TAGIHA~1.XLS
-/r * 1519249 $OrphanFiles/TAGIHA~1.XLS
-/r * 1571469 $OrphanFiles/TAGIHA~1.XLS
After getting that output, I tried to use icat to recover the last XLS file as follows:
icat -o 63 image.001 1571469 > TAGI~1.xls
but the result was a mess, not a readable XLS file (a quick header check is sketched below, after the questions).
2. Since fls and icat were of no avail, I resorted to foremost:
blkls image.001 > target.unalloc, then foremost -v -T -i target.unalloc
The XLS files that foremost carves open perfectly; the problem is that the foremost output carries no file names.
PS: I've tried extracting with TSK as well, but I believe TSK also uses icat to extract its orphan files.
3. Questions
a. Is there another way to extract these XLS files along with their file names?
b. Can you shed some light on these $OrphanFiles?
My understanding is that these are names left in the root directory whose FAT chains (and therefore the addresses of the sectors used by the files) are no longer known, so icat cannot recover them. Foremost, on the other hand, uses file signatures and couldn't care less about the root directory (file names).
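As a side note on point 1: one quick way to check whether an extracted file really is an XLS file, assuming the output name used above, is to look for the OLE2 signature that .xls files written by older Excel versions start with (the bytes D0 CF 11 E0 A1 B1 1A E1), e.g.
file TAGI~1.xls
xxd TAGI~1.xls | head -1
If the extraction hit the right data, file should report an OLE2/Composite Document type and xxd should show that signature at offset 0.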
Thank you very much for the help.
PPS: Sorry for my English.
To answer your question on the $OrphanFiles:
"
Orphan files are deleted files that still have file metadata in the file system, but that cannot be accessed from the root directory. In most file systems, the file metadata (such as times and which blocks are allocated to a file) are stored in a different location than the file name. The name points to the metadata location. It is possible for the name of a deleted file to be erased or reused, but the file metadata still exists. We call these Orphan Files because they have no parent (or at least the root directory is not its ultimate parent). "
Ref
wiki.sleuthkit.org/ind...phan_Files
So, since these files are stored on a FAT file system, the entries in the output of fls -o 63 -r F imagename.001 | grep -i file_name
-/r * 649873 $OrphanFiles/TAGIHAN.xls
r/r * 122506 $OrphanFiles/PT8D15~
-/r * 1212051 $OrphanFiles/TAGIHA~1.XLS
-/r * 1282702 $OrphanFiles/TAGIHA~1.XLS
-/r * 1374865 $OrphanFiles/TAGIHA~1.XLS
-/r * 1472145 $OrphanFiles/TAGIHA~1.XLS
-/r * 1519249 $OrphanFiles/TAGIHA~1.XLS
-/r * 1571469 $OrphanFiles/TAGIHA~1.XLS
that I provided in my first post are really files that still have a directory entry, but can no longer be reached from the root directory. So the icat command won't be able to recover them, because, when running
icat -o 63 imagename.001 1571469 > TAGI~1.xls
1. icat will first read the directory entry of file TAGI~1.xls at 1571469,
2. icat will find the address of the first cluster of file TAGI~1.xls,
3. but when following the rest of the file through the FAT entries, icat won't find the next cluster address, because the original FAT entries have already been reused by other files, or the disk has been formatted, leaving new, empty (unallocated) FAT entries.
Meanwhile, foremost doesn't start by reading the directory entries or the FAT at all; instead, it only searches for start and end signatures (within that unallocated space).
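A minimal way to test this reasoning, assuming the same image and offset as above, would be to run istat on each orphan entry, for example
istat -o 63 imagename.001 1571469
and see whether it still reports a size and a list of sectors; if it does, the directory entry at least still holds a start cluster.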
Please do correct me if I am wrong. Thank you in advance.
First of all, I'm sorry for bumping this thread; second, I've done some homework and this is where it got me.
1. First, using fls, I found some orphaned files:
fls -o 63 -r F imagename.001 | grep -i file_name
-/r * 649873 $OrphanFiles/TAGIHAN.xls
r/r * 122506 $OrphanFiles/PT8D15~
-/r * 1212051 $OrphanFiles/TAGIHA~1.XLS
-/r * 1282702 $OrphanFiles/TAGIHA~1.XLS
-/r * 1374865 $OrphanFiles/TAGIHA~1.XLS
-/r * 1472145 $OrphanFiles/TAGIHA~1.XLS
-/r * 1519249 $OrphanFiles/TAGIHA~1.XLS
-/r * 1571469 $OrphanFiles/TAGIHA~1.XLS
2. Then I used istat to see the metadata of the last file listed above (this is the part that I got wrong the last time):
istat -o 63 imagename.001 1571469
Directory Entry: 1571469
Not Allocated
File Attributes: File, Archive
Size: 24064
Name: TAGIHA~1.XLS
Directory Entry Times:
Written:  Mon Aug 24 14:26:16 2009
Accessed: Tue Aug 7 00:00:00 2012
Created:  Tue Aug 7 09:40:58 2012
Sectors:
20896 20897 20898 20899 20900 20901 20902 20903
20904 20905 20906 20907 20908 20909 20910 20911
20912 20913 20914 20915 20916 20917 20918 20919
20920 20921 20922 20923 20924 20925 20926 20927
20928 20929 20930 20931 20932 20933 20934 20935
20936 20937 20938 20939 20940 20941 20942 20943
This means that the directory entry still points, through the FAT, to the sectors used by that file.
3. Now I don't get how to recover TAGIHA~1.XLS. I've tried using
dd if=imagefile of=outputfile bs=4096 skip=20896 count=6
and blkls, but again to no avail (a cluster-size check with fsstat is sketched after this list).
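One check that might help here, assuming the same image name and offset as above:
fsstat -o 63 imagename.001
should report the sector size, the cluster size and the file system layout, which helps to work out how the sector numbers printed by istat relate to the start of the volume.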
Please correct me if I'm wrong, and give me a hint on where to go from here. I really, really appreciate your help. Thank you.
Have you taken into account the first 63 sectors of the disk when extracting?
Although it lists the sectors, I think they may be relative to the beginning of the volume, not the disk (I don't use foremost too much, so don't quote me on that!).
Try adding 63 to your skip count to account for the space before the file system.
Yup, I've tried that too; I've also tried using icat, which already works with the file system offset.
I've asked around on the TSK mailing list and I think I've made some progress.
I tried to find the metadata entries that point to the data units I want to recover. The data units in question come from istat -o 63 imagename.001 1571469; I then used ifind:
ifind -o 63 -d 20896 imagename.001
(and the same for some other data units of that file)
According to ifind, none of them is pointed to by metadata entry 1571469. So the most probable explanation is that those data units are already in use by other entries.
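Two more checks that could support or refute this, assuming the same image and offset:
blkstat -o 63 imagename.001 20896
reports whether that sector is currently allocated, and, if ifind returns some other metadata address for it,
ffind -o 63 imagename.001 <that address>
should print the name of the file that is using it now (<that address> is just a placeholder).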
After that I tried to trace the data units carved by foremost; again, none of them is pointed to by the "tagihan.xls" entries listed earlier.
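For mapping the carved files back, one possible route, assuming foremost's audit.txt lists each carved file's byte offset inside target.unalloc, that blkls was run with the same -o 63 volume offset, and that the data unit on this FAT volume is the 512-byte sector: divide that offset by 512 to get a blkls address, map it back to an original sector with
blkcalc -o 63 -u <blkls_unit> imagename.001
(<blkls_unit> being a placeholder), and then feed that sector to ifind -d as above.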
I've also tried looking at the metadata of the files carved by foremost.
If anybody has another suggestion, I will be more than happy to try it.
Thank you.
Let's see if we can understand the issue.
The file is 24064 bytes in size.
The device is 512 bytes/sector (evidently)
24064/512=47 sectors
istat lists 48 sectors, contiguous, starting from sector 20896 (which is fine as undoubtedly it lists the sectors corresponding to the clusters actually indexed in the FAT for that file, and the cluster size is unlikely to be 512 bytes).
For some reason, you used dd with a block size of 4096 bytes (8 sectors, possibly the cluster size) and you extracted 6 of such clusters.
The math up to now is OK, so the only possible thing is that you got the offset wrong; you used this command, which is seemingly wrong twice (if not thrice):
dd if=imagefile of=outputfile bs=4096 skip=20896 count=6
Since you set the block size to 4096, what you are skipping are 20896 blocks of that size and NOT 512-byte sectors (which is seemingly what istat output).
Additionally, you forgot about the 63 sectors before the beginning of the volume.
20896+63=20959
Try:
dd if=imagefile of=outputfile bs=512 skip=20959 count=48
The skip additionally assumes that the output of istat is about sectors counted starting from 0 (which may or may not be accurate).
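As a quick cross-check, assuming 512-byte sectors and that istat counts from sector 0 of the volume: the data should then start at byte (20896 + 63) * 512 = 10731008 of the image, and an XLS (OLE2) file begins with the signature bytes D0 CF 11 E0 A1 B1 1A E1, so something like
xxd -s 10731008 -l 16 imagefile
should show that signature if the offset is right.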
jaclaz
jaclaz, thank you for your note.
This is the dd command that I ran now:
dd if=imagename.001 of=dd_output.xls bs=512 skip=20959 count=48
48+0 records in
48+0 records out
24576 bytes (25 kB) copied, 0.00039966 s, 61.5 MB/s
Excel still cannot read the output. Moreover, here is the icat command that I ran (please correct me if I'm wrong).
PS: The md5sum of that file extracted with Autopsy is the same as with my icat command below.
icat -o 63 -v imagename.001 1571469 > icat_output.xls
tsk_img_open Type 0 NumImg 1 Img1 imagename.001
Not an EWF file
fsopen Auto detection mode at offset 32256
raw_read byte offset 32256 len 65536
raw_read byte offset 97792 len 65536
raw_read byte offset 294400 len 65536
iso9660_open img_info 140321104 ftype 2048 test 1
iso_load_vol_desc Bad volume descriptor Magic number is not CD001
Trying RAW ISO9660 with 16-byte pre-block size
fs_prepost_read Mapped 32768 to 69904
iso_load_vol_desc Bad volume descriptor Magic number is not CD001
Trying RAW ISO9660 with 24-byte pre-block size
fs_prepost_read Mapped 32768 to 69912
iso_load_vol_desc Bad volume descriptor Magic number is not CD001
fatfs_inode_lookup reading sector 105856 for inode 1571469
raw_read byte offset 54230528 len 65536
raw_read byte offset 97280 len 65536
tsk_fs_file_walk Processing file 1571469
fatfs_make_data_run Processing deleted file 1571469 in recovery mode
raw_read byte offset 10731008 len 65536
and here are the md5sums of both of them:
3a1ef7b320ee2d675ed631dbd4bc53c3 dd_output.xls
1f12a90ee22e0565be9be1f9a8673688 icat_output.xls
Why do they have different md5sums? Aren't they supposed to be the same file? My problem remains, though: I still can't read the dd output, nor have I found out whether the file that foremost carved is the same as one of the files in the fls output from my first post.
Any suggestion or correction will be most welcome. Thank you in advance.
Well, why don't you simply open the files in a hex editor/viewer?
As said, from the info you provided, the dd command could still be off by one sector.
BTW, even if it were the right start sector address, the file you extracted with dd is 48 sectors whilst the original was seemingly 47, so the MD5 sums would not match anyway.
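A quick way to test this, assuming the start sector is right and the file is contiguous:
dd if=imagename.001 of=dd_output47.xls bs=512 skip=20959 count=47
md5sum dd_output47.xls icat_output.xls
47 sectors are exactly 24064 bytes, the size istat reported, so if those assumptions hold the two checksums should now match (dd_output47.xls is just a new output name). As a side note, the raw_read byte offset 10731008 in your verbose icat output happens to be exactly (20896 + 63) * 512, which at least suggests icat went reading from that same start sector.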
There is still a further possibility (sorry, but I am not that familiar with istat output): that istat numbers sectors in yet another way (for example by counting only sectors belonging to the actual file system data area, bypassing the initial reserved sectors and those used by the FAT tables).
You could also take the Excel header from a correctly extracted file (if I recall correctly, different versions of Excel have different headers) and grep/cat (or whatever) the image for that header string. You should get a number of hits, and one of them will belong to the "right" file, confirming the offset of its start sector within the image.
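One possible way to do that search with GNU grep, assuming the file is an OLE2-based .xls (signature D0 CF 11 E0 A1 B1 1A E1):
grep -obUaP '\xd0\xcf\x11\xe0\xa1\xb1\x1a\xe1' imagefile
Each hit is a byte offset into the image; dividing it by 512 (and subtracting the 63 sectors before the volume) gives a candidate start sector to compare with the istat output.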
jaclaz