Sections of Unallocated Space Filled with 0xFF

10 Posts
6 Users
0 Reactions
2,940 Views
(@laura4458)
Active Member
Joined: 14 years ago
Posts: 17
Topic starter  

What explanations are there for sections of unallocated space being filled with 0xFF's? (I'm interested, too, in what your initial thoughts/reactions are on this without any other information.)

I noticed it while examining the hard drive with FTK. My initial thought was that some sort of wiping had been done on the hard drive. I also began to wonder if there could be something wrong with the image (a raw (dd) image created with FTK Imager 3.1.2.0). So, to remove that factor from the equation, I attached the original/suspect drive (using hardware write blockers, of course) and mounted it with FTK Imager. I scrolled through blocks of unallocated space in the hex view, and I see the same thing as when I'm looking at the image: numerous blocks of unallocated space filled with 0xFF. So, I think I've ruled out a problem with the image.

I'm estimating that around 75 GB of the 400 GB hard drive is filled with 0xFF. There are still unallocated blocks that contain data, and FTK did carve files from those blocks.

Another anomaly on the same drive: the operating system is Windows 7 Home Premium, and about the last 80% of the pagefile.sys and hiberfil.sys files are each filled with 0xFF. I haven't completed the examination of the hard drive, so there may be other anomalies.

Is there something obvious that I'm missing? What explanations are there for what I'm seeing? Any idea how to figure out why some blocks of unallocated space are filled with 0xFF and others aren't? What else do I need to look for to find a cause? If it is an indication that a wiping program was used, what software are you aware of that could produce this pattern (especially if you've seen the same thing during an examination), and any ideas on how I could determine whether that software was used?

If there is more information you need that would help determine possible explanations for this, please let me know. I'll do everything I can to fill in those information gaps.

I appreciate any help/ideas/suggestions that you could give me. Thanks!


(@trewmte)
Noble Member
Joined: 19 years ago
Posts: 1877
 

laura4458

Here is one method of filling a file with 0xFF bytes:

http://www.noah.org/wiki/Dd_-_Destroyer_of_Disks

You can use /dev/zero and `tr` to generate and fill a file with any given byte constant. This creates a 10MB file filled with ones as a bit pattern (0b11111111, 0377, 255, or 0xff).

dd if=/dev/zero bs=1M count=10 | tr '\0' '\377' > test_data.bin

It is one thing to see methods like this posted on the internet; it is entirely another thing whether they work and/or totally remove traces, remnants, etc. So you may wish to check for yourself.
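One simple self-check, assuming the file was produced by the `dd`/`tr` pipeline above (the file name `test_data.bin` comes from the post): delete every 0xFF byte and count what survives.

```shell
# Create the file as in the post: 10 MB of 0xFF.
dd if=/dev/zero bs=1M count=10 2>/dev/null | tr '\0' '\377' > test_data.bin

# Verify test_data.bin is entirely 0xFF: strip every 0xFF byte and
# count the remainder. A count of 0 means every byte was 0xFF.
tr -d '\377' < test_data.bin | wc -c   # prints 0
```

The same one-liner works on a block of unallocated space exported from an image, which is a quicker sanity check than scrolling through a hex view.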

Interesting article here

http://cs.harvard.edu/malan/publications/pet06.pdf


(@Anonymous 6593)
Guest
Joined: 17 years ago
Posts: 1158
 

What explanations are there for sections of unallocated space being filled with 0xFF's?

The simplest is probably the use of one of those tools that wipe free disk space. Which one … if you look at the description of SDelete (from Sysinternals) you'll find their particular approach documented; at first glance it seems likely to leave traces in the file system.

Of course, there are usually places those utilities don't reach … say, outside the area covered by a volume. If those places are 0xFF-wiped as well, that clearly calls for some additional explanation.

A more complex one starts from a hard disk that has been wiped completely with 0xFF, on top of which an OS has been installed and usual file usage has taken place. This kind of approach will produce 0xFF also in normally unreachable places. If you have files with a ValidData length set, you may have allocated clusters that are not really part of the file yet. (You could see these on XP, but I think later versions of NTFS may not allocate them.) If you do, and those clusters are also 0xFF-filled, it would fit. However, the longer this kind of system is used, the more clusters will be overwritten, so it can't be applied everywhere.
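A quick way to quantify how much of an image fits this hypothesis is to count the sectors or clusters that are entirely 0xFF. This is only a sketch; the image name `disk.img`, its contents, and the 4 KiB block size are illustrative assumptions, not details from the thread.

```shell
BS=4096

# Build a small demo image: one all-0xFF block followed by two zero
# blocks (in practice, point the loop below at your raw dd image).
dd if=/dev/zero bs=$BS count=1 2>/dev/null | tr '\0' '\377' > disk.img
dd if=/dev/zero bs=$BS count=2 2>/dev/null >> disk.img

# Reference block of 0xFF to compare each image block against.
dd if=/dev/zero bs=$BS count=1 2>/dev/null | tr '\0' '\377' > ff.block

total=$(( $(wc -c < disk.img) / BS ))
ff=0
i=0
while [ $i -lt $total ]; do
    # cmp -s exits 0 only if the block is byte-identical to ff.block.
    if dd if=disk.img bs=$BS skip=$i count=1 2>/dev/null | cmp -s - ff.block; then
        ff=$((ff + 1))
    fi
    i=$((i + 1))
done
echo "$ff of $total blocks are all 0xFF"   # prints: 1 of 3 blocks are all 0xFF
```

The block-by-block `dd` loop is slow on a 400 GB image, but the distribution it reveals (whether the 0xFF regions align with cluster boundaries, volume boundaries, or neither) can help distinguish a free-space wiper from a whole-disk pre-wipe.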

about the last 80% of the pagefile.sys and hiberfil.sys files are each filled with 0xFF.

How much primary memory is in the system? Has it been altered recently? (Just a strange idea: what happens to an existing page file if you add extra memory? Does the page file get resized?)

Just as a safety precaution – if you haven't run a memory tester on the system, try to do so. You don't want any kind of hardware glitch to surprise you.


EricZimmerman
(@ericzimmerman)
Estimable Member
Joined: 13 years ago
Posts: 222
 

I haven't verified this with the latest DBAN, but based on this:

http://sourceforge.net/p/dban/feature-requests/78/

it looks like this was at one time a requested option.

DBAN would allow for wiping every sector of a disk with a pattern, and then, as others have said, installing Windows would punch holes in all the 0xFF's.


(@laura4458)
Active Member
Joined: 14 years ago
Posts: 17
Topic starter  

So, it sounds like there are definitely programs that will wipe sections of the free space as well as the pagefile and hiberfil. I'm going to be looking for any remnants of the programs you've mentioned, and I appreciate that. The ongoing discussion regarding totally erasing a PC before reinstalling a system coincides nicely with what I'm working on now.

How much primary memory is in the system? Has it been altered recently? (Just a strange idea: what happens to an existing page file if you add extra memory? Does the page file get resized?)

That's a great thought, athulin, and thanks for the idea. I don't know if the memory in the system has been altered; that's something for me to look at. Allegedly the subject's "tech guy" hasn't done any work on the laptop or the hard drive I'm examining, but that doesn't mean the memory wasn't altered. It looks like the OS was upgraded about two years after the original (new) purchase of the laptop, so it's definitely not as it was when it was purchased.

With it being clear that there are wiping programs that will write 0xFF to the pagefile, the hiberfil, and unallocated space, I still have a couple of issues to address.

1. Is it possible to determine when the wiping occurred? When I examine the unallocated space, there are sections where FTK has carved files. For carved OLE files, FTK has generated created and modified times. I realize there are dangers in assuming the accuracy of these times for files carved from unallocated space. (Comments? Warnings?) But there are also files where the time is within the content of the file itself. For example, there's a carved HTML file that includes car sale listings with the date/time the user posted the ad within the ad itself. Obviously that specific file could not have been deleted before the actual Internet used-car listing was posted, but I don't think that means the unallocated space could not have been wiped after that date, right? The subject could have wiped the unallocated space and then deleted his Internet history/cache (which would have been generated before the unallocated-space wiping), right? So, is there any way to pin down when the wiping occurred?

2. So, there are programs that will enable a user to wipe unallocated space and pagefile and hiberfil data. Is there any explanation (other than the possibility athulin mentioned above, which I still need to check out) in which the wiping occurred by some default process or as a byproduct of some other unrelated process (such as the memory-alteration possibility athulin mentioned)? Or can we definitely say that the user intentionally wiped these areas of the hard drive?


Bulldawg
(@bulldawg)
Estimable Member
Joined: 13 years ago
Posts: 190
 

Is the drive an SSD? I have not seen this in practice yet, but it's my understanding that issuing an ATA Secure Erase command to an SSD will release all the electrons in the NAND cells, which looks to the file system like 0xFF. This may also be true of new drives, depending on the manufacturer. If you then format the drive with the /q (quick format) option, the drive will still have a whole lot of 0xFF in unallocated space and in the unpartitioned space. Also, you may see 0xFF in pagefile.sys and hiberfil.sys depending on how long the computer has been in service and how much memory it has compared to the usage patterns.


(@laura4458)
Active Member
Joined: 14 years ago
Posts: 17
Topic starter  

Is the drive an SSD?

The hard drive is a 400 GB Toshiba MK4058GSX.

Here is the spec sheet.


(@laura4458)
Active Member
Joined: 14 years ago
Posts: 17
Topic starter  

Also, you may see 0xFF in pagefile.sys and hiberfil.sys depending on how long the computer has been in service and how much memory it has compared to the usage patterns.

Has anyone else seen this with these two files? I've examined a fair number of hard drives, and I haven't seen this before. And as far as the "amount of memory compared to the usage patterns" goes, why would these files be filled with 0xFF instead of 0x00?


Bulldawg
(@bulldawg)
Estimable Member
Joined: 13 years ago
Posts: 190
 

That was pure speculation that requires some testing. If a drive is filled with 0xFF and a pagefile is created, does Windows fill the pagefile with data before using it, or are those bytes left as-is until they are needed to page memory? Same question for the hiberfil.

Just a quick look at one of my examination machines: the pagefile and hiberfil are mostly filled with 0x00, which is what the unallocated clusters are filled with. The pagefile and hiberfil don't get much use, so the 0x00 is likely left over from the original data on the drive. I doubt Windows fills either file with 0x00 when they are created. To test this, wipe a drive with 0xFF or some other pattern and install Windows. What do the pagefile and hiberfil look like before you use them? What about after some light use?
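A quick way to eyeball that fill pattern is to tally byte frequencies at the tail of the file. This is a sketch; `pagefile.bin` stands in for a pagefile.sys copied out of the image, and here the demo file is built artificially (64 KiB of zeros followed by 64 KiB of 0xFF) so the pipeline has something to chew on.

```shell
# Demo stand-in for an exported pagefile: zeros then 0xFF.
dd if=/dev/zero bs=65536 count=1 2>/dev/null > pagefile.bin
dd if=/dev/zero bs=65536 count=1 2>/dev/null | tr '\0' '\377' >> pagefile.bin

# Most frequent byte values in the last 64 KiB of the file:
# od dumps hex bytes, tr splits them one per line, then count and rank.
tail -c 65536 pagefile.bin | od -An -v -tx1 | tr -s ' ' '\n' | grep -v '^$' \
    | sort | uniq -c | sort -rn | head -3   # the dominant value here is "ff"
```

Sampling a few offsets this way (start, middle, tail) makes the "last 80% is 0xFF" observation easy to demonstrate and document without scrolling a hex view.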


jaclaz
(@jaclaz)
Illustrious Member
Joined: 18 years ago
Posts: 5133
 

This may depend on the settings for the pagefile (hiberfil.sys size should only be affected by the amount of RAM).

The default setting on Windows NT based systems is something like "Let system manage the pagefile".

This creates a "dynamic size allocation" file.

Let's say that the machine has 1 GB of RAM; the pagefile (if "let the system manage it" is enabled) will probably be set to something like 1 GB-1.5 GB or 1 GB-2 GB.
Basically, the minimum size is the amount of RAM and the maximum size is between 1.5 and 2.0 times the minimum.

But let's imagine that the pagefile size is manually set to 512 MB-512 MB.
Then the wipe of the disk (actually writing 0xFF's instead of 0x00's) is performed.
Then the setting is reverted back to "let the system manage it" or, for the sake of simplicity, changed to a "static" 1.5 GB-1.5 GB.
The space allocated on disk will be 1.5 GB (or, if you prefer, the existing 512 MB pagefile.sys will be "enlarged" to 1.5 GB), which will probably initially consist of the 512 MB of the "old" pagefile.sys + 1 GB of 0xFF's.

Then, when the laptop is run/used, in normal operation (and with "relevant" amounts of RAM; depending on the OS, but on XP, as an example, 1 GB usually means "a lot") the pagefile may never be hit beyond the first 512 MB or, say, never beyond 1.0 GB.

At the end, what you have is a largely unused pagefile, of which the last 512 MB are the contents of the disk at the time the pagefile itself was "enlarged/set at its current size", i.e. all 0xFF's.

I would say not common, but entirely possible.

Now, about hiberfil.sys.
Let's imagine that the *whatever* program that wrote the 0xFF's to the hard disk used the strategy of creating a "huge" set of 0xFF's in memory and then writing the contents of this memory chunk to the hard disk (or let's imagine that, to verify those writes, it loads "huge" chunks of disk sectors into RAM).
If the machine is hibernated at this point, hiberfil.sys will contain a "huge" number of 0xFF's alright.
As well, if the disk was filled with 0xFF after hiberfil.sys was deleted, then when the file is next created, most probably only the part of memory that is in use is written to the file, and the rest "remains" 0xFF.

jaclaz

