As I understand it, both NTFS, the default file system of Windows, and ext4, the default file system of Linux, store their central file table near the beginning of the volume. On NTFS it is the master file table (MFT); on ext4 it is referred to as the inode table. The benefit of keeping everything in one place is seek performance: file metadata can be scanned faster.
However, NTFS additionally stores a partial copy of the master file table at a different location, called $MFTMirr, as several sources claim:

- CGSecurity
- the WhereIsMyData WordPress blog (linked from the Wikipedia article about NTFS)
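For reference, the starting clusters of both $MFT and $MFTMirr are recorded in the NTFS boot sector, at offsets 0x30 and 0x38 respectively, per the documented on-disk layout. Below is a minimal Python sketch that parses these fields from a synthetic boot sector; on a real volume you would instead read the first 512 bytes of the partition, which requires administrator privileges.

```python
import struct

def parse_ntfs_boot_sector(sector: bytes) -> dict:
    """Extract the $MFT and $MFTMirr byte offsets from an NTFS boot sector."""
    bytes_per_sector, = struct.unpack_from("<H", sector, 0x0B)
    sectors_per_cluster = sector[0x0D]
    mft_cluster, = struct.unpack_from("<Q", sector, 0x30)
    mftmirr_cluster, = struct.unpack_from("<Q", sector, 0x38)
    cluster_size = bytes_per_sector * sectors_per_cluster
    return {
        "cluster_size": cluster_size,
        "mft_offset": mft_cluster * cluster_size,
        "mftmirr_offset": mftmirr_cluster * cluster_size,
    }

# Synthetic example: 512-byte sectors, 8 sectors per cluster,
# $MFT at cluster 786432, $MFTMirr at cluster 2 (values are made up).
sector = bytearray(512)
struct.pack_into("<H", sector, 0x0B, 512)
sector[0x0D] = 8
struct.pack_into("<Q", sector, 0x30, 786432)
struct.pack_into("<Q", sector, 0x38, 2)

info = parse_ntfs_boot_sector(bytes(sector))
print(info)  # mftmirr_offset = 2 * 4096 = 8192
```

Reading the same fields from both the boot sector and its backup copy at the end of the volume is one way recovery tools locate the MFT on a damaged disk.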
I have some questions regarding the MFT mirror, but I am putting them in a dedicated post to avoid mixing up the topics.
According to Wikipedia, the MFT mirror is:
> Duplicate of the first vital entries of $MFT, usually 4 entries (4 kilobytes).
Four entries only? If this is true, the MFT mirror would be basically useless. I had assumed it covers at least the first several levels of directories.
As I have read somewhere, NTFS also supports fragmentation of the master file table itself, should its initially reserved space run out, meaning it can be continued elsewhere on the disk. The inode table of ext4, by contrast, is fixed in size at creation time, making it prone to inode exhaustion. I can no longer find the source that says the $MFT of NTFS can itself be fragmented, but since the master file table appears to simply be a hidden file named $MFT in the file system's root directory, it seems plausible. This would mean that a file system largely designed in the 1990s, NTFS, has two key benefits over a file system designed in the mid-2000s, ext4: an MFT mirror and an expandable file table.
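The inode-exhaustion risk is easy to check on a running Linux system; it is the same information `df -i` reports. A minimal sketch using Python's `os.statvfs`, available on Unix-like systems:

```python
import os

def inode_usage(path: str):
    """Return (total, free) inode counts for the file system containing path.

    File systems that allocate inodes dynamically (e.g. btrfs) report
    f_files == 0; return None in that case, since no fixed table exists.
    """
    st = os.statvfs(path)
    if st.f_files == 0:
        return None
    return st.f_files, st.f_ffree

usage = inode_usage("/")
if usage is not None:
    total, free = usage
    print(f"{free} of {total} inodes free")
```

On an ext4 volume, once the free count hits zero, no new file can be created even if plenty of data blocks remain free.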
FAT-based file systems like FAT32 and exFAT treat folders the same way as files, so they are scattered around the block device and linked from their parent directory. This linked listing lowers seeking performance compared to a centralized file table. However, it makes the file system immune to the inode exhaustion that ext4 and its predecessors ext3 and ext2 are prone to, and, most importantly, it shields the file system against a destroyed beginning, which can be caused by specifying a wrong output device to the dd or ddrescue utilities, or by some random, unexplainable failure as described in my earlier post.
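The linked layout can be illustrated with a toy model: in a FAT, entry n holds the number of the next cluster of the file occupying cluster n, so reading a fragmented file means hopping through the table. This is a simplified sketch with a made-up table (real FAT32 treats any value from 0x0FFFFFF8 upward as end-of-chain):

```python
# Toy model of a FAT cluster chain: fat[n] holds the next cluster of the
# file occupying cluster n. Cluster numbering starts at 2, as in real
# FAT file systems.
EOC = 0x0FFFFFFF  # simplified FAT32 end-of-chain marker

def walk_chain(fat, start_cluster):
    """Follow a file's cluster chain from its starting cluster."""
    chain = [start_cluster]
    while fat[chain[-1]] != EOC:
        chain.append(fat[chain[-1]])
    return chain

# A fragmented file: starts at cluster 2, continues at 5, 6, then 9.
fat = [0, 0, 5, EOC, EOC, 6, 9, 0, 0, EOC]
print(walk_chain(fat, 2))  # [2, 5, 6, 9]
```

If the table is destroyed, the chain links (5, 6, 9 above) are lost, but the directory entry still names the starting cluster 2, which is exactly the recovery situation described below.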
Should the beginning of a FAT-based file system be destroyed, the file allocation table that contains the fragmentation information (the cluster chains; exFAT additionally keeps an allocation bitmap) would be gone, so fragmented files would need to be laboriously puzzled back together. However, the files' metadata, including sizes and starting clusters (from which the starting LBAs, logical block addresses, can be derived), would largely survive, since they are stored in the directory listings scattered around the disk. Also, in my experience, many memory card vendors set a high cluster size during factory formatting, such as 128 KB. While this makes storing many small files inefficient, which memory cards are not expected to do anyway if used in digital cameras, it at least reduces fragmentation. Having to puzzle together 4 KB fragments would be a nightmare, and hardly possible for anything non-human-readable, i.e. binary data.
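For an unfragmented file, recovery from a surviving directory entry is indeed simple arithmetic: the starting cluster maps to an LBA, and the recorded size says how many bytes to read from there. A sketch with hypothetical geometry values:

```python
def cluster_to_lba(cluster: int, data_start_lba: int, sectors_per_cluster: int) -> int:
    """Map a FAT cluster number to the LBA of its first sector.

    Cluster numbering starts at 2, so cluster 2 sits exactly at the
    start of the data area.
    """
    return data_start_lba + (cluster - 2) * sectors_per_cluster

# Hypothetical geometry: data area begins at LBA 16384; a 128 KB
# cluster spans 256 sectors of 512 bytes each.
lba = cluster_to_lba(10, 16384, 256)
print(lba)  # 16384 + 8 * 256 = 18432
```

A recovery tool would then read ceil(size / 512) sectors starting at that LBA; only fragmented files require guessing where the remaining pieces lie.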
Additionally, file deletion is more destructive on ext4 than on FAT and NTFS, as an article by the SANS Institute suggests:
> Clearing the extent means that we lose the physical block address of the first block as well as the length of the extent. In other words, there's no meta-data left in the inode that will help us recover the deleted file. This behavior is analogous to EXT3 clearing the block pointers in the inode when the inode is deallocated. Unfortunately, this means that we're forced to rely on traditional file-carving methods to recover deleted files, which makes life much more difficult.