Hello
After a lot of study and Googling, I have decided to register in this forum for help.
I wish to recover a text file (.rtf file).
There was a write error when I saved this file for the last time, and when I restarted Windows, the file appeared as zero-sized and could no longer be opened.
When I look into the MFT entry, using NTFS Walker, it says that there are no pointers to the dataruns.
It seems that these pointers have been overwritten.
But when I look at the raw MFT entry, I see a lot of data at the end of the entry. Could that be some kind of backup of the old pointers, or maybe the pointers themselves, not overwritten?
I need an expert to take a look at the MFT entry (I post a raw copy of it below), and tell me whether it is possible to recover the pointers or not.
Recovering these pointers would allow me to quickly restore the whole file.
Otherwise, I would have to carve the whole disk looking for fragments, and this .rtf file is very fragmented, as it was edited several times over a one-month period, and many of the fragments belong to old edits.
Also, I wonder if simply running CHKDSK would automatically recover the "lost dataruns" or "lost clusters" (producing the typical FOUND.xxx folder containing FILExxxx.CHK files). As a precaution, I have not run CHKDSK yet.
Here I post the raw MFT entry
thanks in advance!
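For reference, the "pointers to the dataruns" are the run list of the file's non-resident $DATA attribute, and its on-disk encoding is simple enough to decode by hand if a copy survives in the record. Below is a minimal Python sketch of that decoding, under the standard NTFS layout (header byte: low nibble = size of the length field, high nibble = size of the signed, delta-encoded offset field). The function name `decode_runs` and the example bytes are illustrative, not taken from the actual entry posted here.

```python
def decode_runs(raw):
    """Decode an NTFS run list into (start_lcn, cluster_count) pairs.

    `raw` is assumed to hold the bytes starting at the run list inside
    a non-resident $DATA attribute of an MFT record.
    """
    pos = 0
    lcn = 0  # each offset is a signed delta from the previous run's LCN
    runs = []
    while pos < len(raw) and raw[pos] != 0x00:  # 0x00 terminates the list
        header = raw[pos]
        len_size = header & 0x0F         # low nibble: bytes in length field
        off_size = (header >> 4) & 0x0F  # high nibble: bytes in offset field
        pos += 1
        count = int.from_bytes(raw[pos:pos + len_size], "little")
        pos += len_size
        delta = int.from_bytes(raw[pos:pos + off_size], "little", signed=True)
        pos += off_size
        lcn += delta
        runs.append((lcn, count))
    return runs

# Illustrative run list: 0x21 0x18 0x34 0x56 0x00 means one run of
# 0x18 clusters starting at LCN 0x5634.
print(decode_runs(bytes([0x21, 0x18, 0x34, 0x56, 0x00])))  # [(22068, 24)]
```

If recognizable byte patterns like this appear in the slack at the end of the record, decoding them this way would give candidate cluster runs to read back directly.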
How do you know the file is fragmented? How was it generated? RTF files normally come from a word-processor package and are written in one pass, so the file may not be fragmented.
Two things to try:
1) Search for any backups of the file, or the main file that created the backup.
2) Data carving, and try to work out from the content whether it is the one you want.
The MFT, in my opinion, is of no use to you.
If it is fragmented - good luck and remember to take backups next time you edit a file more than 30 times.
Shadow copies? Is it Windows Vista+ (or Server 2003)?
Otherwise, could you post a binary hex dump? I can look through your hex display manually, but I don't have much time and don't fancy doing it by hand/eye.
Yes, it is very fragmented; I have carved out more than 50 fragments of text (dataruns).
The file was generated using Windows XP's WordPad, and the disk was almost full, which is why the file is so fragmented. Many of the fragments belong to old edits of the file, so there is a lot of redundancy, and a lot of mess. Also, the file contains several pictures, and each picture is also fragmented. The pictures span many clusters and mainly contain "ffffff", so for me it is impossible to carve out these images.
Fortunately, I did a copy a few days before the crash, so most of the info is safe.
Re. 1), where can I search for backups of the file? Is there any backup created by Windows XP? I searched for temp files in Windows system directories, but I couldn't find anything. Also, this happened in September last year, so I think any Windows backups are gone by now. I have preserved intact only the hard disk containing the file. The system disk has been used.
Reconnoitre may do the job.
I have Windows XP, SP3. I don't know if these shadow copies exist.
Okay, I am going to try to save the MFT entry into a bin file. I will be back.
Edit: Okay, I have the .bin file ready. It is 1 Kbyte long.
How can I upload it? I don't see any option to attach files in the edit panel.
How big is the file?
50 fragments sounds a very large number even for a full disk.
If you have a previous edit, your best bet may be to go back and work on that copy, and give up on recovery.
When processing fragments, false matches are often the worst problem, and you need good routines to match each cluster boundary and make sure the pieces come from the same file / edit. With many very similar files, you have a problem. However, with a sample file, you may be able to find strings that are now on a cluster boundary and check them against the original file to see if they are correct.
Related to the subject of so many fragments is the issue of defragmenting a disk. Many people will say it is a waste of time and will not help performance. For 99.9% of disks, you will never spot any improvement in speed, but an occasional defrag will help tidy up such files, and maybe give longer runs of free space for future edits.
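The boundary check described above can be sketched in a few lines of Python. Everything here is illustrative: the function name, the window size, and the toy data stand in for the real previous edit and the real carved clusters.

```python
def junction_plausible(frag_a, frag_b, reference, window=32):
    """True if the bytes spanning the a->b junction also occur in `reference`.

    A hit is only evidence, not proof: with many similar old edits the
    same string may occur in several versions (the false-match problem).
    """
    joint = frag_a[-window:] + frag_b[:window]
    return joint in reference

# Toy example: `reference` stands in for the previous edit of the .rtf
# file, and the fragments stand in for two carved clusters.
reference = b"{\\rtf1 The quick brown fox jumps over the lazy dog.}"
frag_a = b"...padding... The quick brown fox"
frag_b = b" jumps over the lazy dog. ...more..."
print(junction_plausible(frag_a, frag_b, reference, window=16))  # True
```

In practice the window would straddle real 4096-byte cluster boundaries, and any candidate ordering of fragments would be checked junction by junction against the known earlier copy.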
The file's data is more than 10 Mbyte long. Most of it is due to the pictures, of course.
Yep, I have a copy of a previous edit, and I have carved out the new fragments, so I think I have it all, but it would be reassuring to recover the file elegantly, using the pointers the MFT contained before the crash. When Windows froze, the file's MFT entry was updated and the size was set to zero. But I wonder if the pointers are still at the end of the MFT entry.
How can I attach files?
You can try using DMDE
http://softdm.com/
It has a feature for mapping clusters.
Another approach is "negative logic".
Make an image of the disk.
On the image, start deleting (and wiping, for example with sdelete) each and every file.
What remains (and is not 00) should be only "previously deleted files" or "remainders not anymore indexed".
You will still be searching for needles, but maybe no longer in a haystack, rather in a box.
jaclaz
Okay, I am looking at DMDE's Cluster Map function.
According to the instructions, first I have to do "Update Cluster Map". Is this safe?
And what is the result of this? Does it say which file a cluster belongs to? So, if I locate a lost, unindexed (orphan) cluster, does this tell me the original file it belongs to? How is this possible? How can DMDE know this?
The negative-logic approach is also interesting. The hard disk is 160 GB, and 99% is used, so with this method the search is reduced to 1%, or 1.6 GB.
A combination of both approaches would be even better.
Update: I have tried DMDE's Cluster Map function, but I don't understand the symbols in the cluster map (something like =====[]<>====).
Now I go to sleep, I am feeling a bit sick.
Either you have a lot of spare time on your hands or this has now become some sort of Moby d**k "white whale" which you are determined to capture.