What are the most robust file systems?

(@exerceo)
Posts: 12
Eminent Member
Topic starter
 

I am considering creating a new Linux installation on an external SSD formatted with NTFS instead of ext4. To my knowledge, NTFS stores folders (i.e. lists of files) both in the master file table ($MFT) and as separate folder records scattered around the disk. What suggests this is that IsoBuster lists folders' LBAs (logical block addresses) with numbers outside the range of LBAs that $MFT occupies. In comparison, ext4 stores its inode table, with all file metadata, at a vulnerable location at the beginning of the partition, so I am looking for an alternative file system.
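One way to check the IsoBuster observation is to read the $MFT start cluster out of the NTFS boot sector and compare a directory's LBA against it. Below is a minimal sketch with a synthetic in-memory boot sector; the field offsets (0x0B, 0x0D, 0x30) are the real NTFS boot-sector layout, but the values are made up, and the special encoding NTFS uses for large sectors-per-cluster values is ignored:

```python
import struct

def mft_byte_offset(boot_sector: bytes) -> int:
    """Compute the byte offset of $MFT from an NTFS boot sector.

    Real NTFS boot-sector field offsets:
      0x0B: bytes per sector    (u16, little-endian)
      0x0D: sectors per cluster (u8; large values use a special encoding,
            ignored in this sketch)
      0x30: $MFT start cluster  (u64, little-endian)
    """
    bytes_per_sector, = struct.unpack_from("<H", boot_sector, 0x0B)
    sectors_per_cluster = boot_sector[0x0D]
    mft_cluster, = struct.unpack_from("<Q", boot_sector, 0x30)
    return mft_cluster * sectors_per_cluster * bytes_per_sector

# Synthetic example: 512 bytes/sector, 8 sectors/cluster, $MFT at cluster 4
bs = bytearray(512)
struct.pack_into("<H", bs, 0x0B, 512)
bs[0x0D] = 8
struct.pack_into("<Q", bs, 0x30, 4)

offset = mft_byte_offset(bytes(bs))
print(offset)          # 16384 (4 clusters * 8 sectors * 512 bytes)
print(offset // 512)   # 32 (sector/LBA relative to the partition start)
```

A directory record whose LBA falls well past the region $MFT occupies would support the idea that NTFS keeps directory index data outside the MFT as well.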

Various reports and Wikipedia also suggest that ZFS is robust. I have not examined the features of ZFS yet; however, the Linux installer does not list ZFS in the file system picker.

To your knowledge, which file systems have as many of the following features as possible?

  • Redundant copies of the file index (also known as the file table) and fragmentation bitmap, ideally at both the beginning and end of the partition.
  • In addition to the centralized file index, individual directories with metadata about the contained items should be scattered around the disk, like in FAT and exFAT.
  • Individual files' entries store the location of at least the first few fragments (like in NTFS), in addition to the block bitmap (known from FAT/exFAT).
  • To protect against accidental deletion, deleting files and directories should only mark the items' entries as deleted, without immediately nullifying metadata such as file names, sizes, attributes such as time stamps, and on-disk locations (starting cluster number). This is done correctly by FAT/exFAT. A counter-example is ext4, which nullifies the inode containing the metadata (see Understanding EXT4 (Part 1): Extents).
  • Ideally, each file should have surrounding sectors with metadata about the file itself, at least for fragmented files. A sector after each fragment should link to the next fragment, or perhaps store as many fragment cluster numbers as fit in one sector, for redundancy.
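The deletion behaviour described above can be illustrated with a toy model. This is not the real on-disk FAT format, just a sketch of the principle: a FAT-style delete only flags the directory entry, so name, size and starting cluster remain recoverable, whereas an ext4-style delete would wipe those fields from the inode:

```python
from dataclasses import dataclass

@dataclass
class DirEntry:
    """Toy directory entry modelled loosely on FAT."""
    name: str
    size: int
    start_cluster: int
    deleted: bool = False

class ToyDirectory:
    """Illustrative directory where delete() only marks the entry.

    Because the metadata is never nullified, an undelete tool can
    restore the file's name, size and starting cluster. This mirrors
    FAT/exFAT behaviour; it is not a real on-disk implementation.
    """
    def __init__(self):
        self.entries = []

    def add(self, name, size, start_cluster):
        self.entries.append(DirEntry(name, size, start_cluster))

    def delete(self, name):
        for e in self.entries:
            if e.name == name and not e.deleted:
                e.deleted = True          # mark only; metadata survives
                return
        raise FileNotFoundError(name)

    def undelete(self, name):
        for e in self.entries:
            if e.name == name and e.deleted:
                e.deleted = False
                return e                  # fully recovered
        raise FileNotFoundError(name)

d = ToyDirectory()
d.add("report.pdf", 4096, 17)
d.delete("report.pdf")
e = d.undelete("report.pdf")
print(e.name, e.size, e.start_cluster)  # report.pdf 4096 17
```

In real FAT, deletion overwrites only the first byte of the entry's name with the marker 0xE5; everything else in the entry is left in place.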
 
Posted : 26/12/2022 5:08 pm
(@exerceo)
Posts: 12
Eminent Member
Topic starter
 

Since the edit window has elapsed, I will put it here: I am considering installing Linux on NTFS on the solid-state drive not only because it appears to have a more robust structure than ext4, but also because I can manage files from any Windows computer: Windows obviously supports NTFS, whereas ext4 can only be accessed on Windows through third-party tools.

If file systems other than NTFS or ext4 have substantial resilience benefits, I might consider using one of those, but currently, NTFS is my most likely choice.

 
Posted : 26/12/2022 6:27 pm
mokosiy
(@mokosiy)
Posts: 54
Trusted Member
 

If robustness is more important than transfer speed, I'd recommend looking at the more modern Btrfs. It provides reliability through its copy-on-write design, built-in checksums, and snapshots.
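The copy-on-write idea is what makes this robust against power loss: data is never overwritten in place, and the pointer to the current version is updated only after the new data is safely written. A toy sketch of the principle (not the real Btrfs on-disk format; Btrfs actually uses CRC-32C checksums by default, and plain `zlib.crc32` stands in here):

```python
import zlib

class CowStore:
    """Toy copy-on-write block store in the spirit of Btrfs.

    Writes append a new checksummed block to an append-only log, then
    flip the 'root pointer' as the last step. A crash mid-write leaves
    the old, consistent version intact, and checksums catch corruption
    on read. Old blocks remain physically present, which is also the
    basis for cheap snapshots.
    """
    def __init__(self):
        self.blocks = []      # append-only (data, checksum) log
        self.root = None      # index of the current valid block

    def write(self, data: bytes):
        self.blocks.append((data, zlib.crc32(data)))
        self.root = len(self.blocks) - 1   # atomic pointer flip, done last

    def read(self) -> bytes:
        data, crc = self.blocks[self.root]
        if zlib.crc32(data) != crc:
            raise IOError("checksum mismatch: corruption detected")
        return data

s = CowStore()
s.write(b"v1")
s.write(b"v2")
print(s.read())        # b'v2'
print(s.blocks[0][0])  # b'v1' -- the old version is still there
```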

Our company has been using a Btrfs-formatted internal SSD in Atola TaskForce since 2018. It stores all case and report data, and must work flawlessly even when someone powers off the system. We have not had a single issue with it so far.

Another reason to use Btrfs is Synology NAS devices. When you set one up, you may notice that Synology DSM suggests Btrfs as the single alternative to ext4. That makes us think the file system is robust enough.

 
Posted : 30/12/2022 9:47 am
(@c-r-s)
Posts: 170
Estimable Member
 

I agree that Btrfs is probably the best choice, followed by ZFS and ext4. NTFS is not in my top-three list because, on a single external SSD, the contribution of the file system's features to overall robustness is negligible. On the contrary, using a file system from a different OS family in production, and even considering alternating writable mounts across such systems, is far from robust storage handling.

 
Posted : 01/01/2023 1:40 pm