Tools for scanning dd images / Finding an encrypted file

21 Posts
5 Users
0 Reactions
4,997 Views
(@gtbase)
Active Member
Joined: 8 years ago
Posts: 10
Topic starter  

the hidden volume would normally be inside a "container" that should have an identifiable header.

Where does it say that the container should have an identifiable header? I could not find any reference to it in the webpage you linked.
I have always been assuming that not even the container is marked by any header. I may be wrong, but I seem to remember reading this somewhere.


   
Passmark
(@passmark)
Reputable Member
Joined: 14 years ago
Posts: 376
 

There is a script here to calculate the Shannon entropy of a file:
http://code.activestate.com/recipes/577476-shannon-entropy-calculation/#c3

Maybe you could modify it to do the same for 10MB blocks of raw disk data instead, then load the result into a spreadsheet. The big random parts of the disk should then be obvious, and it should be a simple matter to find the exact start of the random block by visual inspection and carve it out.
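Something along those lines (a minimal stand-alone sketch in Python 3, not the ActiveState recipe itself; the 10MB block size and the (offset, entropy) output are just one way to do it):

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte: 0.0 (constant data) to 8.0 (uniform random)."""
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

def scan_image(path: str, block_size: int = 10 * 1024 * 1024):
    """Yield (offset, entropy) for each block of a raw image, e.g. for CSV export."""
    with open(path, "rb") as f:
        offset = 0
        while True:
            block = f.read(block_size)
            if not block:
                break
            yield offset, shannon_entropy(block)
            offset += len(block)
```

Encrypted data should score close to 8.0 bits/byte; zeroed or text sectors score much lower, so the random region stands out as a plateau in the plotted values.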


   
jaclaz
(@jaclaz)
Illustrious Member
Joined: 18 years ago
Posts: 5133
 

the hidden volume would normally be inside a "container" that should have an identifiable header.

Where does it say that the container should have an identifiable header? I could not find any reference to it in the webpage you linked.
I have always been assuming that not even the container is marked by any header. I may be wrong, but I seem to remember reading this somewhere.

It is written right at the beginning of the page, in a drawing where a green vertical line on the left side is tagged "Header of the Standard Volume".

I am not sure (not knowing what the heck you did) which particular situation you are in.

Normally the "hidden volume" is used as in the given link
https://veracrypt.codeplex.com/wikipage?title=Hidden%20Volume

for "plausible deniability".

The "hidden" volume normally resides within a "standard" Veracrypt volume, and the Standard Veracrypt volume does have a header AFAIK

https://veracrypt.codeplex.com/wikipage?title=VeraCrypt%20Volume%20Format%20Specification

https://www.veracrypt.fr/en/VeraCrypt%20Volume%20Format%20Specification.html

jaclaz


   
(@gtbase)
Active Member
Joined: 8 years ago
Posts: 10
Topic starter  

@ jaclaz
Yes, there is a header, but it's all encrypted! So, it does not help at all because there is no searchable string I can go by.


   
jaclaz
(@jaclaz)
Illustrious Member
Joined: 18 years ago
Posts: 5133
 

@ jaclaz
Yes, there is a header, but it's all encrypted! So, it does not help at all because there is no searchable string I can go by.

I see, there was some misunderstanding; my bad, sorry.

Somehow I thought you were attempting to find a hidden volume, not a whole Veracrypt container.

Anyway (unlike the "hidden" volume) the sectors immediately before the header are not necessarily "random".

If you start from a "brand new" hard disk, all sectors will normally be 00's.
Then it will little by little be filled by data.
Data tends not to be "random" in itself (even if it is not ASCII), while "random" data may well contain ASCII bytes, but not much "readable" ASCII (Unicode does not count, as it cannot normally be produced by "random" generators).

So there are four possibilities for the sector(s) just before the outer container header:
1) one or more sectors are all 00's
2) one or more sectors belong to the filesystem structures (and are recognizable as such)[1]
3) one or more sectors are "data" possibly belonging to a file already "recovered"
4) one or more sectors are (for whatever reasons) already "random"

What I was suggesting was to make 00's of all the sectors identified as #2 (if possible) and as #3.
What remains (IF the original file was actually contiguous) should be easily identifiable.
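The blanking step could be scripted along these lines (a rough Python 3 sketch; the 512-byte sector size is an assumption, the sector ranges come from whatever tool identified the filesystem or already-recovered sectors, and it must be run on a working COPY of the image, never the original):

```python
SECTOR = 512  # assumed sector size; adjust to the actual device geometry

def blank_sectors(image_path: str, sector_ranges) -> None:
    """Overwrite each [start, end) sector range with 00's, in place.

    After blanking the recognizable filesystem/data sectors, whatever
    still looks "random" should be the contiguous encrypted block.
    """
    with open(image_path, "r+b") as f:
        for start, end in sector_ranges:
            f.seek(start * SECTOR)
            f.write(b"\x00" * ((end - start) * SECTOR))
```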

Again, does the filesystem loaded in DMDE allow the creation of the "Cluster Map"?

jaclaz

[1] No idea if in the "unknown" filesystem you had such structures exist and if they can be recognised.


   
(@gtbase)
Active Member
Joined: 8 years ago
Posts: 10
Topic starter  

Good news and bad news. I found the exact starting point of my VeraCrypt file, and I was able to successfully decrypt the standard (non-hidden) volume. As for the hidden volume, VeraCrypt validates the password and tries to mount the volume, but Linux gives me an error message that there is some problem with the filesystem, so it cannot be mounted. I already suspected that something was amiss even before attempting to mount the volume, as I had noticed via entropy analysis that in the middle of the big encrypted data chunk there is totally unrelated non-encrypted text, lots of it. Obviously, the VeraCrypt file was not stored in a contiguous block on the disk (Btrfs tends to get fragmented).

I will now attempt to eliminate the unrelated part (probably one or more disk sectors) and splice the file together. Too bad I cannot see the original sectors of the Btrfs filesystem because the partition table got destroyed. I'll have to proceed by trial and error, I am afraid.

Anyway, THANK you very much to all of you guys for the terrific advice you gave me. It came in really, really handy.


   
(@gtbase)
Active Member
Joined: 8 years ago
Posts: 10
Topic starter  

Just in the spirit of 'sharing is caring', here is how I proceeded:
1. I split the huge .dd file (300+ GB) into more than 600 chunks, each about 500 MB.
2. I analyzed the entropy of each of them – I used the freeware tool ent (the C source is also available) – this allowed me to pinpoint the contiguous chunks with very high entropy, i.e. 99.99%.
3. I analyzed the entropy of the chunk immediately before the first high-entropy chunk, on the assumption that the start of the encrypted data was in there. And sure enough, by subdividing it into sub-chunks, then into sub-sub-chunks, iteratively – always going back one chunk before the first of the high-entropy streak – I eventually reached the point where I could visually see the starting point of my file, which was preceded by null bytes.
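The iterative subdivision in step 3 can be automated as a bisection on the entropy boundary. A rough Python 3 sketch, assuming a single clean transition from low- to high-entropy data (the threshold and block size are guesses to tune):

```python
import math
from collections import Counter

def entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte."""
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

def find_entropy_start(buf: bytes, threshold: float = 7.9, min_block: int = 512) -> int:
    """Bisect for the block-aligned offset where low-entropy data gives
    way to high-entropy (encrypted) data. Assumes one clean transition;
    the result still needs refining by visual inspection."""
    lo, hi = 0, len(buf)
    while hi - lo > min_block:
        mid = (lo + hi) // 2
        mid -= mid % min_block          # sample on block boundaries only
        if mid <= lo:
            break
        sample = buf[mid:mid + 8 * min_block]
        if entropy(sample) >= threshold:
            hi = mid                    # transition is at or before mid
        else:
            lo = mid                    # still in the low-entropy region
    return hi
```

This replaces hundreds of manual sub-chunk passes with about log2(chunk size) entropy measurements per boundary.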

I just thought that this info may be useful to someone else in my situation…
Of course, with better tools and a better skillset (which I do not have) this whole process would have taken a fraction of the time it took me.


   
(@gtbase)
Active Member
Joined: 8 years ago
Posts: 10
Topic starter  

It turned out the file was heavily fragmented (8 nearly-contiguous segments). I was able to locate the beginning and ending hex strings of each segment by the entropy analysis + visual inspection combined method.

I need some more technical advice from you guys: although I pinpointed the starting and ending strings of each segment, I am not totally sure about the exact offset of each. By this I mean that I established the cut-off points based on visual inspection – often the random-looking data stream would abruptly end and a series of null bytes would begin, or a fragment of humanly readable text; in those cases I am pretty confident that the cut-off point was selected correctly. But in a few cases I still have doubts.

Now, I was thinking that since the file was stored in Btrfs sectors, the segments were almost certainly cut off at sector boundaries.
Is there a software tool (such as a hex editor) that would let me overlay a simulated sector subdivision on the data and visually show me candidate cut-off points based on it? That is the only way I could be reasonably sure I am identifying the exact end and start points of each segment correctly.
Otherwise, if I am off even by one byte, the whole thing would remain unrecoverable.
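Lacking such a tool, a few lines of script can at least check each candidate offset against an assumed sector grid (4096 bytes is the Btrfs default sectorsize; BASE is the partition's start offset within the image, if known – both are assumptions to adjust):

```python
SECTOR = 4096  # Btrfs default sectorsize -- an assumption, adjust if needed
BASE = 0       # partition start offset within the image, if known

def check_cut_points(offsets):
    """For each candidate cut-off offset, report (offset, aligned?,
    distance in bytes to the nearest sector boundary)."""
    report = []
    for off in offsets:
        misalign = (off - BASE) % SECTOR
        report.append((off, misalign == 0, min(misalign, SECTOR - misalign)))
    return report
```

Any cut point that is not aligned is then either wrong or evidence that BASE (the original partition start) is not where assumed.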

Please advise, and thanks again


   
(@gtbase)
Active Member
Joined: 8 years ago
Posts: 10
Topic starter  

As an update to my previous post:
1. As it happens, all the strings I had identified as markers of fragment start/end are already nicely aligned with the end/start of sectors, as indicated by the offset value in my hex editor (the last part of the value is always 000000). This is encouraging, because it suggests I have been doing it right (although I am by no means a forensics professional).
2. The overall task is turning out to be much more demanding and time-consuming than I expected, because there are many more file fragments than first appeared. I can only detect them by close, repeated entropy analysis; only then do I discover the rogue sectors of unrelated data buried in the middle of my file. This is getting crazy: potentially there are dozens of such rogue sectors to weed out, making reconstruction of the original file a gigantic task.
Is there no software that can reconstruct a file by consolidating all its fragments after automatically detecting and weeding out the unrelated data interspersed in there? I am beginning to think that finding such software may be the only way for me to recover my encrypted volume. Doing it manually, as I have been, is proving really too much.
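In the meantime, a per-sector entropy scan can at least automate the detection part. A Python 3 sketch, with the sector size and entropy threshold as assumptions (encrypted sectors of 4096 bytes score roughly 7.95 bits/byte, text and zeroed sectors far less):

```python
import math
from collections import Counter

SECTOR = 4096  # assumed sector size

def entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte."""
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

def rogue_sectors(buf: bytes, threshold: float = 7.2):
    """Indices of sectors whose entropy falls below `threshold`:
    candidates for unrelated data buried inside the encrypted run."""
    return [i for i in range(len(buf) // SECTOR)
            if entropy(buf[i * SECTOR:(i + 1) * SECTOR]) < threshold]
```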

I would really appreciate some advice at this point.


   
Passmark
(@passmark)
Reputable Member
Joined: 14 years ago
Posts: 376
 

If the file is badly fragmented then your job gets a lot harder.

There are lots of other file types where the data is more or less random. Basically anything that is compressed or encrypted (or remnants of old files that were compressed or encrypted), e.g. compressed video, DOCX files, XLSX files, Zip files, image files, packed executable files, installation packages for software, backups, etc…

If you had enough time, you could automate the search, stitching & testing process and then leave it running for a week. Not an easy job, however, even for a professional, especially as the file is so large.
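The stitching part of such a pipeline might look something like this (a Python 3 sketch; the segment offsets are whatever the search step produced, and each candidate output would then be tested by attempting a VeraCrypt mount):

```python
CHUNK = 1 << 20  # copy in 1 MB pieces to keep memory use flat

def stitch(image_path: str, segments, out_path: str) -> None:
    """Concatenate the byte ranges [(start, end), ...] from the image
    into one candidate container file, skipping everything else."""
    with open(image_path, "rb") as src, open(out_path, "wb") as dst:
        for start, end in segments:
            src.seek(start)
            remaining = end - start
            while remaining > 0:
                piece = src.read(min(remaining, CHUNK))
                if not piece:        # range runs past end of image; bail out
                    break
                dst.write(piece)
                remaining -= len(piece)
```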


   