Some file systems (e.g. FAT) offer little protection against accidental or deliberate changes. For example, a skilled user with appropriate access can directly modify on-disk file system metadata to change timestamps, truncate or delete a file, and so on. FAT offers no protection against this and, if done well, there is little chance it would be detected.
NTFS, on the other hand, is a much more complicated file system and is designed to recover from file system corruption. The primary mechanism for this is the journal in $LOGFILE. This makes it much harder for a skilled user to make direct file system changes that would (i) stick, (ii) not be rolled back, and (iii) remain undetected.
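To illustrate how little stands in the way on FAT, here is a minimal sketch (Python, run against a raw image) that rewrites the last-write timestamp of one directory entry in place. The image name and the byte offset of the 32-byte directory entry are hypothetical; in practice the offset would come from walking the directory clusters. Nothing in FAT cross-references these bytes, so the edit leaves no trace beyond the new values themselves.

```python
# Minimal sketch: patch the last-write timestamp of one FAT directory entry
# directly in a raw image.  FAT keeps no journal, so nothing records the edit.
# "fat.img" and ENTRY_OFFSET are hypothetical; the offset of the 32-byte
# directory entry would normally come from walking the directory clusters.
import struct
from datetime import datetime

ENTRY_OFFSET = 0x0004_2040          # hypothetical offset of a 32-byte dir entry

def fat_date_time(dt: datetime) -> tuple[int, int]:
    """Pack a datetime into the 16-bit FAT date and time fields."""
    d = ((dt.year - 1980) << 9) | (dt.month << 5) | dt.day
    t = (dt.hour << 11) | (dt.minute << 5) | (dt.second // 2)
    return d, t

new = datetime(2020, 6, 1, 12, 30, 0)
wrt_date, wrt_time = fat_date_time(new)

with open("fat.img", "r+b") as img:
    # WrtTime is at offset 0x16 and WrtDate at 0x18 within the 32-byte entry.
    img.seek(ENTRY_OFFSET + 0x16)
    img.write(struct.pack("<HH", wrt_time, wrt_date))
```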
Please, can anyone suggest any (even difficult/theoretical) techniques a user could use to attempt direct NTFS modifications without $LOGFILE rolling them back, and what countermeasures we could use to detect them?
Jim
Retracted -- possible misunderstanding.
I'm not sure I follow the reasoning here. Surely any modification made directly to the file system bypasses the journalling mechanism of the file system. You say that you can directly change the metadata for a file in the FAT file system; one of your examples is changing the timestamp. If I can change the timestamp of a file in FAT using a hex editor, then I could just as easily change the timestamp of a file in NTFS: I just need to edit that file's MFT entry, again with a hex editor. That change won't be logged in the journal, so it can't be detected or rolled back. I think the journal is designed to prevent file system corruption caused by accidents, say a HDD losing power unexpectedly whilst writing changes to the disk. It isn't designed to detect alterations made directly to the file system data through direct user/program manipulation.
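To illustrate the point, here is a minimal sketch that decodes the four $STANDARD_INFORMATION timestamps from one 1024-byte MFT FILE record read straight out of a raw image; overwriting the same eight-byte fields with a hex editor is exactly the kind of direct edit the journal never sees. The image name and record offset are hypothetical, and the sketch skips the update sequence fixups because these timestamps sit well before the end of each 512-byte stride.

```python
# Minimal sketch: decode the four $STANDARD_INFORMATION timestamps from one
# 1024-byte MFT FILE record read straight out of a raw image.
# "ntfs.img" and RECORD_OFFSET are hypothetical.
import struct
from datetime import datetime, timedelta, timezone

RECORD_OFFSET = 0x0C00_0000        # hypothetical byte offset of one FILE record

def filetime(raw: int) -> datetime:
    """Convert a Windows FILETIME (100 ns ticks since 1601) to a datetime."""
    return datetime(1601, 1, 1, tzinfo=timezone.utc) + timedelta(microseconds=raw // 10)

with open("ntfs.img", "rb") as img:
    img.seek(RECORD_OFFSET)
    rec = img.read(1024)

assert rec[:4] == b"FILE", "not a FILE record"

off = struct.unpack_from("<H", rec, 0x14)[0]        # offset to first attribute
while off + 8 <= len(rec):
    attr_type, attr_len = struct.unpack_from("<II", rec, off)
    if attr_type == 0xFFFFFFFF or attr_len == 0:
        raise SystemExit("no $STANDARD_INFORMATION attribute found")
    if attr_type == 0x10:                           # $STANDARD_INFORMATION (resident)
        content = off + struct.unpack_from("<H", rec, off + 0x14)[0]
        created, modified, mft_modified, accessed = struct.unpack_from("<QQQQ", rec, content)
        for label, value in (("created", created), ("modified", modified),
                             ("mft-modified", mft_modified), ("accessed", accessed)):
            print(f"{label:13s} {filetime(value)}")
        break
    off += attr_len
```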
To clarify: NTFS is a very complicated file system and contains linked fields and integrity mechanisms. Consequently, non-trivial changes to NTFS may be detected / rolled back by the journal unless done very skilfully. It is, of course, possible to modify the journal too, but this is considerably more difficult than with a non-journaled file system like FAT.
I was hoping to get some comments (or even a debate) on:
1. What types of (trivial) NTFS changes could be made without the journal being involved?
2. Could the user prevent journal rollback by tampering with the journal somehow? For instance, could the user simply truncate the journal?
3. Could the user go even further and hide tampering with the journal?
4. If the user was skilled enough to tamper with the journal, what other countermeasures could we use to detect this type of direct file system modification (and catch the user)? e.g. $UsnJrnl (see the sketch after this list)
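On point 4, an obvious starting place is parsing $UsnJrnl:$J from the image and looking at what it does and does not record around the suspect window. A minimal sketch of walking USN_RECORD_V2 structures from an extracted $J stream follows; the file name is an assumption, and runs of zero bytes between records are normal because $J is sparse.

```python
# Minimal sketch: walk USN_RECORD_V2 entries in a $UsnJrnl:$J stream extracted
# from an image ("UsnJrnl_J.bin" is a hypothetical file name).  The forensic
# question is what is (and is not) recorded around the suspected direct edit.
import struct
from datetime import datetime, timedelta, timezone

def filetime(raw: int) -> datetime:
    """Convert a Windows FILETIME (100 ns ticks since 1601) to a datetime."""
    return datetime(1601, 1, 1, tzinfo=timezone.utc) + timedelta(microseconds=raw // 10)

with open("UsnJrnl_J.bin", "rb") as f:
    data = f.read()

pos = 0
while pos + 4 <= len(data):
    (length,) = struct.unpack_from("<I", data, pos)
    if length < 0x3C or pos + length > len(data):
        pos = (pos + 8) & ~7                       # zero fill or garbage: resync on 8-byte boundary
        continue
    major = struct.unpack_from("<H", data, pos + 4)[0]
    if major == 2:                                 # USN_RECORD_V2
        usn, ts, reason = struct.unpack_from("<QQI", data, pos + 0x18)
        name_len, name_off = struct.unpack_from("<HH", data, pos + 0x38)
        name = data[pos + name_off: pos + name_off + name_len].decode("utf-16-le", "replace")
        print(f"USN {usn:>14}  {filetime(ts)}  reason=0x{reason:08x}  {name}")
    pos += (length + 7) & ~7                       # records are 64-bit aligned
```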
I am currently researching this area so I am interested in any potential technique, regardless of difficulty.
What do you mean by 'non-trivial changes'? Please define.
A small change to NTFS metadata (like one timestamp) may go undetected. I would call this a trivial change.
However, a more ambitious change (like changing a file size, adding/removing extents, changing file security, etc.) would most likely require collateral changes elsewhere. If the user didn't fully understand this, the change could be picked up by the next file system consistency check.
Similarly, if the user was just "unlucky", even a small change could interfere with the update sequence array (USA) mechanism. If they didn't understand this, it would quickly be found by the file system and repaired. In other words, because NTFS is complicated, a very high level of skill would be needed to make a non-trivial change without detection.
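To make the USA point concrete, here is a minimal sketch that verifies the update sequence fixups of a single FILE record: the last two bytes of every 512-byte stride must equal the update sequence number stored in the record header, so a direct edit that clobbers those bytes makes the record look torn and invites repair. The image name and record offset are hypothetical.

```python
# Minimal sketch: verify the update sequence array (USA) fixups on one
# 1024-byte FILE record.  NTFS writes the 16-bit update sequence number into
# the last two bytes of every 512-byte stride and keeps the original bytes in
# the USA; a direct edit that disturbs those two bytes makes the record look
# torn, and the file system will flag and repair it.
# "ntfs.img" and RECORD_OFFSET are hypothetical.
import struct

RECORD_OFFSET = 0x0C00_0000
STRIDE = 512

with open("ntfs.img", "rb") as img:
    img.seek(RECORD_OFFSET)
    rec = img.read(1024)

usa_off, usa_count = struct.unpack_from("<HH", rec, 0x04)
usn = struct.unpack_from("<H", rec, usa_off)[0]          # first entry: the sequence number

ok = True
for i in range(1, usa_count):                            # one entry per 512-byte stride
    stored = struct.unpack_from("<H", rec, i * STRIDE - 2)[0]   # value actually on disk
    if stored != usn:
        ok = False
        print(f"stride {i}: expected USN 0x{usn:04x}, found 0x{stored:04x}")

print("fixups consistent" if ok else "fixup mismatch: record would be treated as corrupt")
```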
I'm trying to figure out what types of changes could be made and what countermeasures could be used to detect them. In particular, I'm researching whether $LOGFILE could be truncated to bypass some consistency checks and whether I can detect this type of exploit.
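On the truncation idea specifically, a rough first-pass check is simply to page through an extracted $LogFile and confirm the expected structure is present at all: two restart pages (signature "RSTR", or "CHKD" after a chkdsk run) followed by "RCRD" log record pages. A journal that has been zeroed or chopped short shows up as missing signatures or long runs of empty pages. The 4096-byte page size and the file name are assumptions for this sketch.

```python
# Minimal sketch: sanity-check the page structure of an extracted $LogFile.
# A healthy journal starts with two restart pages ("RSTR", or "CHKD" after a
# chkdsk run) followed by "RCRD" log record pages; a zeroed or truncated
# journal shows up as missing signatures or runs of all-zero pages.
# PAGE_SIZE = 4096 and "LogFile.bin" are assumptions.
PAGE_SIZE = 4096

counts = {b"RSTR": 0, b"CHKD": 0, b"RCRD": 0, b"zero": 0, b"other": 0}

with open("LogFile.bin", "rb") as f:
    pages = 0
    while True:
        page = f.read(PAGE_SIZE)
        if not page:
            break
        sig = page[:4]
        if sig in (b"RSTR", b"CHKD", b"RCRD"):
            counts[sig] += 1
        elif not any(page):                 # entirely zero page
            counts[b"zero"] += 1
        else:
            counts[b"other"] += 1
        pages += 1

print(f"pages={pages}", {k.decode(): v for k, v in counts.items()})
if counts[b"RSTR"] + counts[b"CHKD"] < 2 or counts[b"RCRD"] == 0:
    print("suspicious: restart/record pages missing -> possible wipe or truncation")
```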
@jimc You could just as easily use a hex editor to blank an MFT entry, zero out the corresponding file and the corresponding entries in the $Bitmap file. Again, this would completely bypass the journal feature of NTFS. Is that still trivial? Even if the user only deleted the MFT entry and didn't bother deleting the actual file or clearing the $Bitmap entries, you'd never be able to tell who did that, when they did it, what the file was called, when it was saved to the file system, when it was 'deleted', etc. I'm not sure why you think NTFS is immune to tampering because it is 'complex'. I'm pretty sure you could tamper with any part of NTFS without it being detectable, as long as you knew what you were doing.
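(On the detection side of that scenario, a blanked slot does at least have a recognisable shape. Here is a minimal sketch that scans an extracted $MFT for record slots that are all zero or carry a garbage signature below the highest populated record: NTFS normally leaves the "FILE" signature and record content behind when a file is deleted, so an interior blanked slot is a weak but useful anomaly indicator. The 1024-byte record size and file name are assumptions.)

```python
# Minimal sketch: scan an extracted $MFT ("MFT.bin" is a hypothetical name)
# for blanked record slots.  NTFS normally clears only the in-use flag when a
# file is deleted and leaves the "FILE" signature and record content behind,
# so an all-zero or garbage slot sitting below the highest populated record
# number is an anomaly worth a closer look.
RECORD_SIZE = 1024          # assumption: standard 1 KiB FILE records

with open("MFT.bin", "rb") as f:
    data = f.read()

last_populated = -1
odd_slots = []
for n in range(len(data) // RECORD_SIZE):
    rec = data[n * RECORD_SIZE:(n + 1) * RECORD_SIZE]
    if rec[:4] in (b"FILE", b"BAAD"):
        last_populated = n
    else:
        odd_slots.append(n)             # zeroed or unrecognised slot

# Slots beyond the last populated record are just unused MFT space; interior
# holes are the interesting ones.
interior = [n for n in odd_slots if n < last_populated]
print(f"{len(interior)} anomalous record slots below record {last_populated}: {interior[:20]}")
```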
Write operations going directly to the disk go "undetected" by the mechanics of NTFS, but only up to the point where you have messed things up enough that it considers a rollback necessary. Related to this topic, I made this tool a few years ago: https://github.com/jschicht/PowerMft The concept is to manipulate NTFS metadata by performing such direct write operations. If the modification causes NTFS to detect some sort of bad corruption, it will attempt to repair it with its self-healing capabilities, which I believe would be impossible/pointless to prevent. For a good modification, you would be at risk of having the modification reset if it occurred, for example, shortly before such a rollback (caused by some other corruption). Making the modification bypass such a rollback would require you to intercept at a lower level than what "simple" user-mode actions allow; I don't know exactly what would be required, but certainly a kernel-mode driver. A good modification made outside the time window where a reset is a risk (see $LogFile internals) would simply stay on disk, obviously.
But beyond the actual rollback issue, there may be artefacts from lots of places that can be used to identify such a manipulation, ranging from uc, slack, memory, vsc, etc., and of course the presence and artefacts of the tool itself. Lots of sources for potentially detecting it.
As I see it, detection boils down to either:
1) Handling of all these locations where artefacts can be found.
2) An implementation of some logging of such low-level/direct disk writes (like an AV logging in a similar fashion to $LogFile).
3) Some other backup mechanism.
One stupid tool made to prove the point is setmace, which focuses on timestamps. You can't necessarily prevent its operation since it's simply using the exposed WinAPI, which you can't block. But a possibility could be to block certain direct write operations by implementing a filter that enforces some sort of reserved area (for example the NTFS metafiles). Then you can always look hard enough and find the artefacts. Or you could rely on a separate logging and backup solution to complement.
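As one example of "looking hard enough", a widely used artefact check is comparing the $STANDARD_INFORMATION timestamps with the $FILE_NAME timestamps inside the same FILE record, since many timestamp manipulations rewrite only one of the two sets. It is only a heuristic (legitimate operations also make them diverge). A minimal sketch, again against a single extracted record at a hypothetical offset:

```python
# Minimal sketch: compare the $STANDARD_INFORMATION (0x10) timestamps with the
# $FILE_NAME (0x30) timestamps inside one 1024-byte FILE record.  Many
# timestamp manipulations rewrite only one of the two sets, so a divergence
# (classically $SI earlier than $FN) is a weak but well-known indicator.
# "ntfs.img" and RECORD_OFFSET are hypothetical.
import struct
from datetime import datetime, timedelta, timezone

RECORD_OFFSET = 0x0C00_0000

def filetime(raw: int) -> datetime:
    return datetime(1601, 1, 1, tzinfo=timezone.utc) + timedelta(microseconds=raw // 10)

with open("ntfs.img", "rb") as img:
    img.seek(RECORD_OFFSET)
    rec = img.read(1024)

timestamps = {}
off = struct.unpack_from("<H", rec, 0x14)[0]                  # offset to first attribute
while off + 8 <= len(rec):
    attr_type, attr_len = struct.unpack_from("<II", rec, off)
    if attr_type == 0xFFFFFFFF or attr_len == 0:
        break
    if attr_type in (0x10, 0x30):                             # both attributes are always resident
        content = off + struct.unpack_from("<H", rec, off + 0x14)[0]
        if attr_type == 0x10:
            timestamps["si"] = struct.unpack_from("<QQQQ", rec, content)
        elif "fn" not in timestamps:                          # first $FILE_NAME only
            timestamps["fn"] = struct.unpack_from("<QQQQ", rec, content + 0x08)
    off += attr_len

if "si" not in timestamps or "fn" not in timestamps:
    raise SystemExit("record lacks $STANDARD_INFORMATION or $FILE_NAME")

for i, label in enumerate(("created", "modified", "mft-modified", "accessed")):
    si, fn = filetime(timestamps["si"][i]), filetime(timestamps["fn"][i])
    flag = "  <-- mismatch" if si != fn else ""
    print(f"{label:13s} $SI {si}  $FN {fn}{flag}")
```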