I implemented streams, seems to be working ok.
Got it. Implementing it now.
I think I'll have a beta version ready tomorrow.
Happy to provide some input on this thread, but having read through it I get the distinct impression that a lot of problems were caused by not doing fixup. Is it possible for you to summarise what problems still exist?
Ddan
For now I don't seem to have any problems anymore.
Your fixup explanation was a great find and certainly the reason for a number of issues I was seeing (impossible runs, corrupted file names, etc.)
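For reference, applying the fixup (update sequence array) looks roughly like this. This is only a sketch of the general technique, assuming 1 KB records and 512-byte sectors; the function and variable names are my own, not from any particular tool:

```python
import struct

SECTOR = 512  # bytes protected per fixup word (assumption; use the volume's real sector size)

def apply_fixup(record: bytearray) -> bytearray:
    """Undo the NTFS update sequence (fixup) protection on a FILE/INDX record.

    The record header stores, at offsets 4 and 6, the offset of the Update
    Sequence Array (USA) and its length in 16-bit words. The first word is
    the Update Sequence Number (USN); NTFS writes it over the last two bytes
    of every sector of the record and keeps the real bytes in the USA.
    Reading a record without restoring them gives exactly the symptoms
    described above: impossible runs, corrupted names, etc.
    """
    usa_ofs, usa_count = struct.unpack_from("<HH", record, 4)
    usn = record[usa_ofs:usa_ofs + 2]
    for i in range(1, usa_count):
        end = i * SECTOR                       # last two bytes of sector i
        if record[end - 2:end] != usn:
            raise ValueError("fixup mismatch: torn/corrupt record")
        fix = record[usa_ofs + 2 * i: usa_ofs + 2 * i + 2]
        record[end - 2:end] = fix              # restore the original bytes
    return record

# Demo with a synthetic 1 KB record: USA at offset 48, 3 words
rec = bytearray(1024)
struct.pack_into("<HH", rec, 4, 48, 3)
rec[48:50] = b"\xAA\xBB"                       # the USN
rec[50:54] = b"\x01\x02\x03\x04"               # the real end-of-sector bytes
rec[510:512] = b"\xAA\xBB"                     # as stored on disk
rec[1022:1024] = b"\xAA\xBB"
apply_fixup(rec)
assert rec[510:512] == b"\x01\x02"
assert rec[1022:1024] == b"\x03\x04"
```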
I'm still seeing one weird thing: an ADS (alternate data stream) with a strangely big byte size. But that's in my newest code, so maybe I still need to fix a bug there. Going to dive into this one today.
Other than that I think I'm set and I think I will release a beta version tonight.
I'll post here when I release the beta. Always great to get some feedback from experts, should you have the time !
PS: on the error I just mentioned (the large stream) …
All seems to be OK as far as I can see.
Is the stream truly this big, then?
To check streams I use this very nice free little tool:
http//
However, the tool doesn't show system files in the root ($MFT etc.)
So I can't use it to verify this one.
Turns out the big stream is $Bad and belongs to $BadClus and … get this … is the size of the partition.
Is it possible that there is a $Bad stream attached to $BadClus that spans the whole partition and basically maps over all files and folders ?
Is this how it is supposed to be ?
If so then that is something I need to take into account somehow, because if people 'extract' a full drive's content they would otherwise copy the content twice.
Actually … still something wrong … the start address of the file is not the start of the partition … weird …
Sorry for posting this much … I'll dig deeper first now.
But if anybody has some good intel on $Bad and how it is supposed to be (e.g. check on your system) that would be great !
Never seen a $Bad that is not the size of the partition - really, invest in Carrier's book. It is very good.
I bought the book … I got a notification today that it was dispatched. It's still going to take a few days.
>> never seen $bad that is not the size of the partition
Aha … so it *is* supposed to be the size of the partition !? That's what I was asking, if it's normal if it is the size of the partition.
And forget about the start address issue … the stream is sparse … that's the reason. I'm now making "sparse" more obvious in the display to avoid this confusion.
No more bugs/unknowns in my code then … whoohoo (as far as I know) ;-)
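For anyone hitting the same start-address confusion: in a run list, the high nibble of a run's header byte gives the size of the LCN-offset field, and a value of 0 means no disk location is stored at all, i.e. the run is sparse. A minimal sketch (function name and example header values are my own illustration):

```python
def run_header_info(header: int):
    """Split an NTFS run-list header byte into its two nibbles.

    Low nibble  = size (in bytes) of the run-length field that follows.
    High nibble = size of the LCN-offset field; 0 means the run stores
    no disk location at all, i.e. it is sparse. That is why a sparse
    first run makes a file's "start address" look wrong.
    """
    length_size = header & 0x0F
    offset_size = header >> 4
    return length_size, offset_size, offset_size == 0

print(run_header_info(0x04))   # (4, 0, True): 4-byte length, no offset -> sparse
print(run_header_info(0x41))   # (1, 4, False): 1-byte length, 4-byte offset
```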
As others have pointed out, the $Bad file should equate to the size of the volume. Obvious really, since it is designed to record which clusters are good and which are bad; note that it covers both. I have come across one little gotcha in this though, and that relates to a partition that has been resized, from 60 GB to 80 GB for example. The $Bad tends not to be updated to represent the 80 GB. Maybe some resizing software does the job properly?
Also analysing the data run tends to become a little awkward when the run contains bad clusters, which is more than likely in a data recovery scenario.
Here's an example taken from a 60 GB laptop drive running XP-SP3 which crashed with a major filesystem problem. The data run for $Bad was:
04 28 26 81 00
41 01 28 26 81 00
01 4D
11 01 4E
03 1F 45 3E
31 01 20 45 3E
03 F1 23 20
00
As I'm sure you can see, this cannot be interpreted as a normal data run. What it does say is that there are 3 bad clusters and the rest are OK. The good clusters are the sparse lines (1, 3, 5 and 7); the bad clusters are the run lines (2, 4 and 6). If you add together the good clusters and the 3 bad ones, you should get &HDF8F88, which was the number of clusters on the drive. The WinHex image for the drive confirms unreadable sectors in the bad clusters.
Ddan
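Ddan's run list above can also be decoded mechanically. A sketch in Python (the parser is my own illustration of the standard run-list encoding, not code from either tool):

```python
def parse_runs(data: bytes):
    """Decode an NTFS run list into (is_sparse, length, lcn) tuples.

    Header byte: low nibble = byte size of the run-length field,
    high nibble = byte size of the LCN-offset field. An offset size
    of 0 marks a sparse run (no disk location). Offsets are signed
    and relative to the previous run's LCN, which is also where
    "negative runs" come from.
    """
    runs, pos, lcn = [], 0, 0
    while data[pos] != 0x00:                       # 0x00 ends the list
        header = data[pos]; pos += 1
        len_sz, ofs_sz = header & 0x0F, header >> 4
        length = int.from_bytes(data[pos:pos + len_sz], "little")
        pos += len_sz
        if ofs_sz == 0:                            # sparse: the good clusters
            runs.append((True, length, None))
        else:
            delta = int.from_bytes(data[pos:pos + ofs_sz], "little", signed=True)
            pos += ofs_sz
            lcn += delta
            runs.append((False, length, lcn))      # here: a bad cluster
    return runs

# The exact bytes from the 60 GB drive's $Bad run above:
bad = bytes.fromhex(
    "0428268100" "410128268100" "014D" "11014E"
    "031F453E" "310120453E" "03F12320" "00")
runs = parse_runs(bad)
assert sum(r[1] for r in runs) == 0xDF8F88         # total clusters on the drive
assert sum(r[1] for r in runs if not r[0]) == 3    # the 3 bad clusters
```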
Interesting Ddan !
Meanwhile Carrier's book arrived (mere minutes ago).
I'm certainly going to read the relevant sections once the software has been released, to fix any possible issues I haven't thought of, before the next/final release. For now it seems that what I have is working nicely, although I would like to get some confirmation on the so-called negative runs.
Yesterday I could not release the Beta version because I realized I had forgotten to renew my Certificate (doh). I expect a phone call from Comodo today to verify that I am 'me' and hopefully I can sign tonight and release the beta version.
In my software you can list the extents per file, and it would be interesting to confirm if they are correct for files with known negative runs.
I also believe IsoBuster will be a great addition to your tool set. It already does UDF and HFS (and FAT) on HD and flash/USB media etc.
So you might want to check it out.
The beta will not be perfect. I still need to do some coding for Mac resource forks (the MacBinary conversion stuff) and I also don't find deleted files yet. I had expected them to be available like in FAT, but my testing shows that they are removed from the INDX records.
So I need to go read the MFT records that are not in use to find missing files (I guess).
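Scanning for those unused records can be sketched like this (my own illustration of the standard FILE-record header fields, not IsoBuster code; the fixup still has to be applied before trusting the rest of the record):

```python
import struct

def mft_record_state(record: bytes) -> str:
    """Classify an MFT FILE record by its flags word at offset 0x16.

    Bit 0 set = record in use, bit 1 set = record is a directory.
    A record with a valid "FILE" signature but bit 0 clear belongs to
    a deleted file/directory whose attributes (name, data runs) are
    often still intact and recoverable.
    """
    if record[:4] != b"FILE":
        return "invalid"                 # not a FILE record (or wiped)
    (flags,) = struct.unpack_from("<H", record, 0x16)
    if not flags & 0x0001:
        return "deleted dir" if flags & 0x0002 else "deleted file"
    return "dir" if flags & 0x0002 else "file"

rec = bytearray(1024)
rec[:4] = b"FILE"
struct.pack_into("<H", rec, 0x16, 0x0000)   # in-use bit clear
print(mft_record_state(bytes(rec)))          # deleted file
```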
People who have helped me (Dan, Joakim, Paul): send me a personal email and I'll gladly cut you a [Business] type license to the soft.
I'll post here again when the beta has been released. It will depend on the certificate.