How are folks handling error messages saying a file could not be accessed because it was in use by another process?
I'm using FTK Imager to do a Logical (or Custom Content) AD1 image of network shares. If a file is in use, the process continues, and the warning/error shows up when you look at the Image Summary.
Looking at the image after creation, the filename is there, along with the proper size at that time, but when you try to export the file, it comes out empty.
- Do you go back and copy that file out separately?
- Is there a flag in FTK Imager to force the copy?
Thoughts? Thanx.
-=Art=-
And another question that was posed:
On a live system, why would you verify the hash of the image during imaging? It doubles acquisition time. One can always spot-test the image to make sure you see the files and folders, and then verify off-site.
I've always verified during imaging, and will continue to do so when I am not onsite, but I'm wondering what others' thoughts are. It does speed up the process :)
Any thoughts on that?
To verify that the image is solid and working. The hash match is one issue; that the image completed, is usable, and is not corrupted is another.
Good point!
Would opening the image again in FTK Imager, checking that the folder structure and files are there, and spot-checking some of the files show that the image is able to be used?
Is there anyone who "has a friend of a friend" ;) that does not verify the image on-site, and what justification is used for that?
Have there been problems that anyone has encountered later?
(in this "we want everything yesterday" world, I'm just trying to come up with ideas on how to best do onsite work quickly. Sometimes you just can't hurry technology… :D )
Art,
I have known people who have done two different things, neither of which I agree with; one was forced on them, the other came from just not thinking things through.
Case 1: An attorney placed a limited time frame on the examiner: "you have 5 hours to do this." Going by the old rule of imaging at 1 GB a minute, with verification at close to that, there should have been no problems. Mid-image it started to slow down, and while the image completed and was said to be in its verifying stage, the other side said time was up and he had to shut down and leave. He got back to the office, started up the image, and it was corrupted.
Case 2: An image was done, and in EnCase the bar in the lower right corner hadn't yet made the transition to giving a solid time for the verification (it takes a minute or two before showing an approximate time to completion). The drive was unplugged before that transition was made, and the window was simply X'd out of rather than cancelled with Cancel/OK (effectively a Task Manager kill), and that image was corrupted.
These are extreme circumstances that you might be hard-pressed to duplicate, but they happen; a search of the forums shows that you can get corrupted images from all kinds of different things.
Instead of focusing on what TIM, EnCase, or FTK tell you for imaging times, if you are going through a write blocker, stick with 1 GB a minute as your estimate. If you finish early they can be pleasantly surprised, but you probably won't run over the allotted time.
Verify on-site if possible. If not, make sure your client understands and accepts the implications of not doing so.
As a rule I always try to verify onsite, but I have been asked not to do so, either by the people I am working for or by the client. Most of the time the reason is money: they don't want to pay me to sit and verify the images. I explain the issues with not verifying onsite, and as long as I get confirmation that this is what they want, I leave and do the verification when I can. But before I go I will at least open the image and make sure that I can access files.
I have had to return to re-collect once out of twenty-five or so collections where I was not allowed to verify onsite. The total cost to the client for redoing the images, plus travel and expenses, was far more than the cost of the time to verify. No one was really happy, but I had the confirmations and the client paid.
Over all the images I have acquired, I can remember two, maybe three, that completed successfully and later failed to verify. Those odds are bad enough for me to want to verify onsite just to be safe.
Case 1: Attorney places a limited time frame on the examiner: "you have 5 hours to do this." […]
Case 2: Image was done, and in EnCase the bar in the lower right corner hadn't made the transition to giving a solid time for the verifying […]
Both of these sound as if the image format was not suited to the task; I assume that traditional image formats were used. (Not dd, since you can't corrupt a dd image, but something with an image header or footer that wasn't in sync with the data.)
For live imaging, something more like a backup format is needed: backups must be usable even (or perhaps especially) if there is a system crash during the job. (Backup software designed for tape storage usually gets this right automatically, due to the nature of the medium.) Something like a tar file format, tweaked for the metadata of the actual file system being imaged, seems reasonable.
That probably means that the image format must cover:
- a file listing (just the file tree, perhaps along with manual exclusions)
- a priority schedule (where do we start? Imaging C:\Windows is probably less important than C:\Users if there's time pressure)
- a file-by-file image format that probably should provide for segmentation (to allow for multiprocessing, when possible)
- some ability to re-image, perhaps incrementally/differentially, based on previous work
The process would then be to image the files and, once all files have been gone through, repeat to check whether the files that couldn't be imaged the first time can be imaged now. I don't see any reason to go on beyond that.
And, of course, cancelling/aborting an image would be designed to close things down nicely, with appropriate indicators in the image, not just abandoning things.
Altogether, it sounds very much like backup software to me, except for the priority list. I can see that it would be possible to do the full listing, the exclusions, and the segmentation in a separate program, creating one or more backup lists for multiple backup instances to chew on.
Has anyone tried using backup software for this kind of job?
Good points, all. Great info. :)
I always try to verify onsite. As a previous thread explains, you are looked on as the "Expert" (hate that term - lol), so you get to make the procedures; however, should there be a deviation from them, as ClownBoy suggests, I get it in writing. If it is a phone conversation, I generally send a follow-up email saying something like "To recap our conversation… yada yada yada…."
The request not to verify generally comes from those that know the process. I do not separate imaging and verification when I explain the process; I say that "imaging" is "acquisition and verification of the image." If they know the procedures and still request that I don't verify onsite… then the back and forth begins :)
As paranoid as I am, I still spot-check every image to see if I can open it in Imager and see the files, regardless of verification; it takes only a few minutes.
Back to my other question on this thread:
How do people handle files that are in use by users during a live image?
Example: User A has FileX, FileY, and FileZ open during a live image of their home directory on the server.
- FTK Imager reports that the files could not be processed because they were in use.
- I have been individually copying those files to a folder on a local drive and then imaging that folder separately.
How are others handling this?
Thanx all.
Sometimes you have to use different techniques and document what you did and why.
A program like Unlocker, WhoLockMe, etc. will free up the file and should have no effect on the actual system times, other than stopping a process to free up the item, as you described.
I've never had a case where it was tossed out, or where I was reprimanded for using this method.