
Issues with: Forensic Acquisition Of Solid State Drives

52 Posts
6 Users
0 Likes
2,887 Views
AmNe5iA
(@amne5ia)
Posts: 168
Estimable Member
Topic starter
 

Recently Forensic Focus published the following article.

Forensic Acquisition of Solid State Drives with Open Source Tools

Much of this article flew in the face of what I understood to be the issues regarding the TRIM command and garbage collection on SSDs.

I understand the main issue as being that if files are deleted, the OS sends the TRIM command to the SSD, which can start the whole garbage collection mechanism. In the same way you can avoid files being actively deleted by pulling power to the drive, you can do the same with an SSD that has received the TRIM command, but as soon as you power the SSD on again, the garbage collection resumes, regardless of whether or not it is connected to a write blocker. You effectively get stuck in a situation where you believe the contents of the deleted files still remain on the SSD, but the very process of plugging it in to image it results in the contents of those files being purged. I assumed, when I opened this article, that this would be the issue being dealt with. It is not.

The author seems to believe that the TRIM command will be sent/resent only when mounted. As far as I am aware, simply mounting a filesystem on an SSD would not start the process of sending TRIM commands. In addition, I was also under the impression that TRIM commands could only be sent over certain connections. For example, TRIM commands can be sent over a SATA connection but not over a USB connection.

As far as I am aware, to resend TRIM commands you'd have to mount the filesystem (whilst enabling the 'discard' option) AND THEN also run a command like fstrim.
Is that right?
And, if you had mounted the filesystem over a USB connection, the TRIM command wouldn't even reach the SSD.
Is that right?

In addition, it is my understanding that any ongoing garbage collection would not be prevented by not mounting the filesystem.

I'm not really sure what he has proved or achieved in this paper if my assumptions/beliefs are correct.

Also, in this paper he states that certain cables prevent reliably hashing the same SSDs (Section 7.3.1). They all prove reliable when they are used to hash a partition but not when hashing a volume. How do you hash a volume? You can hash a partition easily enough, 'md5sum /dev/sda1', but how has he hashed the volume? 'md5sum /mnt/sda1'? I suspect that is the real reason the results are not reliable, not because of any specific cables.
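To illustrate the distinction I have in mind (the device names here are purely hypothetical examples, not taken from the paper):

# hashing the whole device ("physical drive")
md5sum /dev/sda

# hashing a single partition
md5sum /dev/sda1

# "hashing" a mounted filesystem tree - md5sum cannot hash a directory,
# so you would have to hash the individual files instead, e.g.:
find /mnt/sda1 -type f -exec md5sum {} +

Hashing a mounted tree only covers file contents, so it would not be expected to match any block-level hash anyway.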

Can someone please explain this all to me?

 
Posted : 14/03/2018 5:54 pm
thefuf
(@thefuf)
Posts: 262
Reputable Member
 

I understand the main issue as being that if files are deleted, the OS sends the TRIM command to the SSD, which can start the whole garbage collection mechanism.

First of all, the term garbage collection can have two different meanings:
1. an act of changing the data in a discarded logical block when this block, previously reported as unused by the operating system, is processed by the drive;
2. an act of walking through file system structures in order to locate and (later) discard unused logical blocks, performed by the firmware of the drive.

In the same way you can avoid files being actively deleted by pulling power to the drive, you can do the same with an SSD that has received the TRIM command, but as soon as you power the SSD on again, the garbage collection resumes, regardless of whether or not it is connected to a write blocker. You effectively get stuck in a situation where you believe the contents of the deleted files still remain on the SSD, but the very process of plugging it in to image it results in the contents of those files being purged.

In general, yes. However, a forensic examiner can try to put a drive into techno mode to prevent discarded blocks from being erased (Techno Mode – The Fastest Way To Access Digital Evidence On Damaged SSDs).

As far as I am aware, simply mounting a filesystem on an SSD would not start the process of sending TRIM commands.

There are two ways to issue discard commands in Linux:
1. as soon as data is marked as unallocated ("Continuous TRIM");
2. in a batch ("Periodic TRIM").

So, yes, mounting a file system (by itself) doesn't result in discard commands being sent to the drive.
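A minimal sketch of the two mechanisms (device and mount point names are hypothetical):

# "Continuous TRIM": discard requests are issued as blocks are freed
mount -o discard /dev/sdb1 /mnt/test

# "Periodic TRIM": discard requests for all free space are issued in one batch
fstrim -v /mnt/test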

In addition, I was also under the impression that TRIM commands could only be sent over certain connections. For example, TRIM commands can be sent over a SATA connection but not over a USB connection.

In theory, it could be possible to issue the SCSI UNMAP command over a USB connection. But this will likely fail.

A random report

It confirmed that "deletenotify" worked on my JMicron JMS567 SATA/USB bridge, which supports UASP and UNMAP->TRIM translation. Since LBPME bit on the bridge is set to 0, so apparently Windows does not check the bit before issue UNMAP commands. Not sure about the requirement on the two VPDs though.

(source)
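If you want to see what a given drive/bridge combination reports, something along these lines can be used (a rough sketch, assuming the sg3_utils package is installed; the device name is hypothetical):

# does the kernel expose any discard capability for this device?
lsblk --discard /dev/sdb

# READ CAPACITY(16): shows the LBPME bit mentioned in the report above
sg_readcap --long /dev/sdb

# Logical Block Provisioning VPD page (UNMAP-related details)
sg_vpd --page=lbpv /dev/sdb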

As far as I am aware, to resend TRIM commands you'd have to mount the filesystem (whilst enabling the 'discard' option) AND THEN also run a command like fstrim.
Is that right?

Yes (if we speak about resending the commands). However, the fstrim command may be executed from an anacron job. Some live forensic distributions had this anacron job enabled (thus, leaving a file system mounted for a long time may result in the free space being discarded).
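To check whether a particular live distribution has such a job enabled, one could look for it along these lines (a sketch only; file and unit names vary between distributions):

# older cron/anacron based setups
ls /etc/cron.daily/ /etc/cron.weekly/ 2>/dev/null | grep -i trim

# systemd based setups
systemctl list-timers 2>/dev/null | grep -i trim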

And, if you had mounted the filesystem over a USB connection, the TRIM command wouldn't even reach the SSD.
Is that right?

In most cases, yes.

In addition, it is my understanding that any ongoing garbage collection would not be prevented by not mounting the filesystem.

Yes.

I'm not really sure what he has proved or achieved in this paper if my assumptions/beliefs are correct.

Well, the author isn't aware of many things. For example, the Tableau TD3 device mounts file systems present on source drives, so, according to the author, this should result in data being changed on a source drive.

 
Posted : 14/03/2018 10:12 pm
jaclaz
(@jaclaz)
Posts: 5135
Illustrious Member
 

There are a few things I am not sure/convinced about.

TRIM is an OS feature.
If the issue is that a TRIM command may be issued accidentally, wouldn't it make sense to use an old, non-TRIM-enabled OS (like - say - a BartPE/PE1.0, or a Linux with a kernel before 2.6.33)?

But I believe it is possible, even with a later Linux, to build a kernel with TRIM disabled (or with *whatever* sub-system TRIM belongs to disabled).
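As a side note, a quick way to check whether a given setup would issue/accept TRIM at all could be something like this (a sketch only; the device name is hypothetical):

# does the drive advertise TRIM support? (SATA, via hdparm)
hdparm -I /dev/sdb | grep -i trim

# is anything currently mounted with the discard option?
grep -i discard /proc/mounts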

Since once the data has been copied into an image it is "safe" from the effects of TRIM commands, this could be a very minimal OS, only needed in the acquisition phase to perform a dd (or similar), much like OSFClone by PassMark:
https://www.osforensics.com/tools/create-disk-images.html

Besides, I could find no trace in the paper of the usage of a writeblocker (in the sense of make/model).

As AmNe5iA noted, this

It is important to note that despite the change in the hash values generated from the disk’s volume, the hash value generated from the disk’s partition will always match regardless of the adapter used.

needs some clarification. I cannot understand what the author meant, and - AFAICU - "standard practice" is to image the whole PhysicalDrive and never "partitions" or "volumes", so I wonder what the relevance of this is - if confirmed/replicable.

As well, using a forensic live CD and/or having automount disabled is already (again AFAIK) "standard practice", so I am failing to see in what way the "proposed method" represents an innovation.

The point #3 maybe?

Decide on what adapter and/or cable to use and take note of brand and model. The same and only adapter should be used to verify and image an SSD.

But the recommendation in itself doesn't seem adequate to solve the (reported but, as said, not entirely clear) issue about the adapter/converter (not cable) altering the hash.
I mean, say (hypothetically) that you have Adapter #1 and Adapter #2.
Say you verify that Adapter #1 always produces a correct hash and that Adapter #2 always alters it.
Then the remedy is "only use Adapter #1 and throw Adapter #2 in the garbage".

What am I missing?

jaclaz

P.S. And we are again on the apodictic

† Forensic Live CDs have write-protection rules [13] to prevent changes from occurring to connected devices, but a hardware write-blocker must be used when performing data acquisition.

 
Posted : 15/03/2018 9:27 am
thefuf
(@thefuf)
Posts: 262
Reputable Member
 

Besides, I could find no trace in the paper of the usage of a writeblocker (in the sense of make/model).

Table 4. Also, a previous version of this table included the "Tableau TD3 forensic imager" (it's gone now, but there is a cached page in Google).

 
Posted : 15/03/2018 9:56 am
jaclaz
(@jaclaz)
Posts: 5135
Illustrious Member
 

Besides, I could find no trace in the paper of the usage of a writeblocker (in the sense of make/model).

Table 4. Also, a previous version of this table included the "Tableau TD3 forensic imager" (it's gone now, but there is a cached page in Google).

Ahh, thanks, I found it now, it is a .png:

T35es

The image in the cache is still available:

TD3
T35es

jaclaz

 
Posted : 15/03/2018 10:05 am
Jefferreira
(@jefferreira)
Posts: 19
Active Member
 

As the author of the article I am glad that this is happening.

The only thing I can say, and will say, is that you should do the experiments and prove me wrong. I am okay with it.

If you have a spare PC or laptop, fit an SSD inside, boot a live CD such as Ubuntu from a USB stick, format the SSD, create a partition, populate the partition with data, delete some data, unmount the SSD, hash it, take note of the hash, turn the computer off, plug in a forensic live CD, turn the PC/laptop on, boot from the live CD, wait as long as you like and then hash the SSD and check if the hashes match.

I could sit here all day debating garbage collection and TRIM, but enough papers and articles have been written on those issues over the last decade. The only thing I want to add is that Bell and Boddington wrote in their paper that we should give up trying to find a solution to the problem, and that is not very optimistic, is it?

The TD3 should not have been in the post and that is why I asked scar to remove it. The TD3 was a failed experiment.

The article was written because the results from the experiments showed that it is indeed possible to image SSDs without losing any traces of potential digital evidence, otherwise I would not have spent months confirming the results, repeating the same experiments and taking the time to write the paper… The article is a paper.

P.S. The issue with the adapters is not an SSD problem; it also happens with HDDs. I thought about sharing it with everyone so that the paper wouldn't be just about the auto-mount.

Thank you

 
Posted : 15/03/2018 12:30 pm
AmNe5iA
(@amne5ia)
Posts: 168
Estimable Member
Topic starter
 

Yes but all you have really proved is that storing an SSD in a cupboard for 30 days has no effect on the data.

 
Posted : 15/03/2018 12:51 pm
Jefferreira
(@jefferreira)
Posts: 19
Active Member
 

Yes but all you have really proved is that storing an SSD in a cupboard for 30 days has no effect on the data.

Isn't that what we are trying to do? Treat SSDs in a similar way to the way we treat HDDs? Trying to preserve their integrity? The method is not foolproof, as stated in the conclusion, but it offers a solution to the problem: the volatile data stays static and you can recover it.

Unless I am missing something here, the purpose is to preserve integrity to ensure that we can recover as much potential digital evidence as possible, right?

 
Posted : 15/03/2018 12:56 pm
thefuf
(@thefuf)
Posts: 262
Reputable Member
 

If you have a spare PC or laptop, fit an SSD inside, boot a live CD such as Ubuntu from a USB stick, format the SSD, create a partition, populate the partition with data, delete some data, unmount the SSD, hash it, take note of the hash, turn the computer off, plug in a forensic live CD, turn the PC/laptop on, boot from the live CD, wait as long as you like and then hash the SSD and check if the hashes match.

If an SSD implements the Deterministic Read After TRIM feature, then the hash will likely be the same. An SSD with non-deterministic TRIM, or with filesystem-aware firmware, is a different issue.
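Whether a given SSD reports the deterministic behaviour can be checked, for example, with hdparm (a sketch; the exact wording of the output differs between drives and hdparm versions):

hdparm -I /dev/sdb | grep -i "after trim"

# typical lines are "Deterministic read data after TRIM" (DRAT)
# or "Deterministic read ZEROs after TRIM" (RZAT)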

 
Posted : 15/03/2018 1:00 pm
Jefferreira
(@jefferreira)
Posts: 19
Active Member
 

The only thing I can say is that I am just sharing the findings of my experiments, the results show that it works.

You have a list of the equipment I used in one of the tables.

I tried mounted vs unmounted and the truth is that it worked when unmounted. The SSDs were hashed and then imaged 3 times with each one of the adapters; we are talking about hours of waiting to confirm the results (thanks to the write-blocker with a USB 2.0 connector). The experiments were repeated and the results were consistent.

This post, these comments and observations were expected and obvious and, besides asking that you try the method, there is nothing else I can do. Is it possible that I may be missing something? Possibly. Is it possible that it does not work with all SSDs? Possibly, but I tried to get different types of SSDs and with the SSDs that I used, it worked.

 
Posted : 15/03/2018 1:12 pm
jaclaz
(@jaclaz)
Posts: 5135
Illustrious Member
 

The only thing I can say is that I am just sharing the findings of my experiments, the results show that it works.

Well, I still have the questions I asked unanswered; please don't take this in any way as "adversarial", I only want to understand:
1) Can you detail the differences between "volume" and "partition" and share the EXACT commands you used to image the one (and the other)? (and why you did these experiments since normally the whole device or "PhysicalDrive" is imaged)

2) Can you better explain the idea behind (or around) the "take note of the adapter used and only use that one"?

3) What do you see particularly as "novelty" in your findings? Or if you prefer, which specific steps/recommendations in your paper are different from common, current, "best practice standards"?

jaclaz

 
Posted : 15/03/2018 2:05 pm
Jefferreira
(@jefferreira)
Posts: 19
Active Member
 

Jaclaz, I will do my best to explain it.

1) Can you detail the differences between "volume" and "partition" and share the EXACT commands you used to image the one (and the other)? (and why you did these experiments since normally the whole device or "PhysicalDrive" is imaged)

After using shasum -a 256 /dev/sdb to hash the drive, I used dd to image it: dd if=/dev/sdb of=ssd1_first_img.001, and I changed the image name to second, third and so on for a month.

Hashed the image and compared the hash values.

In regards to the partition vs volume: honestly, it was the only way I could think of to differentiate them.

/dev/sdb is the volume and /dev/sdb1 the partition. Because of the changes in the hashes generated by the adapters, I got curious whether the hash values of /dev/sdb would also differ. They did: /dev/sdb changed depending on the adapter while /dev/sdb1 remained unaltered. And that is why I used the internal SATA connector as a baseline for the "correct hash".

I only image drives as a whole too.

2) Can you better explain the idea behind (or around) the "take note of the adapter used and only use that one"?

The idea is to ensure that the results (hashes) are consistent. As shown in the results of the experiments, the hash values changed, and that is why I wrote what I wrote in the recommendations: validate the cables and adapters to ensure that they all generate the same hash value.

3) What do you see particularly as "novelty" in your findings? Or if you prefer, which specific steps/recommendations in your paper are different from common, current, "best practice standards?

From my understanding and knowledge, currently you either let the SSD stabilise by plugging it in with no regard to what is inside, or you (after plugging it in) hash individual files… My question is: why?

I am suggesting a method that is straightforward, not intrusive and that is similar to the way you handle and image HDDs.

Jaclaz, you are really smart and knowledgeable and I would like to say thank you for taking the time to read the article and the post, and for your questions. If there is anything I am doing wrong or missing, please let me know.


 
Posted : 15/03/2018 2:26 pm
jaclaz
(@jaclaz)
Posts: 5135
Illustrious Member
 

1) OK, we will need to find a better way to define things, or agree on a common convention.

The matter is slippery and a lot of people tend to use this or that confusing term improperly: disk, disk drive, drive, volume.

sdb is a "disk drive", a "whole disk-like device", a "device" or (in Windows) a "PhysicalDrive" or "disk"; if you prefer, it is the object composed of ALL sectors of the mass storage device, from offset 0 to the very last sector.

sdb1 is a "partition", if you prefer an object that is a given subset of the entire capacity of the device, that by definition starts at an offset higher than 0 and (normally but not always) also ends before the very last sector. On Windows this is called a "drive", because when mounted it gets assigned a "drive letter", and it is BTW the same object to which you can apply a "volume label".

If a partition is primary it is EXACTLY the same as a "volume" [1].

If a partition is an Extended one, it contains "logical volumes", which are not anything different from a "primary partition" as above, the only difference being that their addresses are not stored in the MBR, but rather in the chain of EPBRs.

On GPT all partitions are primary, so all partitions are also volumes, with the only caveat in [1].

2) About the hashing, if I get it right, you did it in two separate steps:
1) hash the source before dding it
2) hash the resulting image after having dded it
Is that correct?

3) This is where I am really failing to understand: what you recommend seems to me to be what is ALREADY recommended by everyone [2], before and besides your article.
The only item with a *difference* (AFAICT) is about the (combined) effects of points 2. and 7. of your "recommendations":

2. To ensure the integrity of digital evidence and avoid unexpected results, the cables and adapters used at the forensic laboratory should be verified and validated, prior to the imaging process, to ensure that all the cables produce the same hash values.

7. The SATA adapter used at the crime scene to image or hash the Solid State Drive should be the same adapter used at the forensic laboratory for hashing and imaging the SSD. This helps to determine if any changes to the device occurred.

If a given adapter/converter is validated, it is validated, and all validated adapters/converters will not introduce hash changes of any kind.
The alternative is that the given adapter/converter cannot be validated (and should be thrown in the garbage bin) because it unpredictably changes hashes, OR one has to assume that the same (anyway somehow "defective") adapter/converter always changes hashes in exactly the same manner on the same data.

I am asking because very likely I am missing something of relevance in your recommendations.

jaclaz

[1] With most filesystems, (primary) partition = volume = filesystem; the exception is NTFS, where there is a single sector (used to store the backup of the boot sector, i.e. of the first sector of the $Boot file) that is after the end of the volume but inside the partition.

[2] Personally I go even a little further sideways, but that is my personal approach to the possible issues, rigorously not validated, let alone accepted by anyone in the forensic community at large; we may talk about this once the other points are cleared up.

 
Posted : 15/03/2018 3:08 pm
Jefferreira
(@jefferreira)
Posts: 19
Active Member
 

2) About the hashing, if I get it right, you did in two separate steps
1) hash the source before dding it
2) hash the resulting image after having dded it
Is that correct?
"

Thank you for clarifying; I do misuse volume to refer to a device sometimes.

I did four separate steps

1) hashed the SSD
2) Created image of the SSD with dd
3) hashed the image
4) hashed SSD again to confirm if the integrity had been compromised.

I did this for every SSD.

With the auto-mount disabled, I just followed the steps above.
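In shell terms the sequence was roughly this (a sketch only; the output file names are illustrative, and sha256sum could be used instead of shasum):

# 1) hash the SSD
shasum -a 256 /dev/sdb > hash_before.txt

# 2) create an image of the SSD with dd
dd if=/dev/sdb of=ssd1_first_img.001 bs=1M

# 3) hash the image
shasum -a 256 ssd1_first_img.001 > hash_image.txt

# 4) hash the SSD again to confirm the integrity had not been compromised
shasum -a 256 /dev/sdb > hash_after.txt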

In regards to the recommendations, I wrote them based on the feedback I got from some people.

I did what seemed logical. If the validation is common practice then my recommendations are useless.
It is like I previously said: I only added the issue with the adapters because I didn't want the paper to be just about the auto-mount, and a lot of people I know were unaware that this issue with the adapters happens.

The adapters are fine, btw, they are just different. Why? I did not have a chance to find out.

Was it already common practice? I was unaware of it.

And no, I don't think you are missing the relevance of anything I wrote.

Thank you

 
Posted : 15/03/2018 3:37 pm
jaclaz
(@jaclaz)
Posts: 5135
Illustrious Member
 

I did four separate steps

1) hashed the SSD
2) Created image of the SSD with dd
3) hashed the image
4) hashed SSD again to confirm if the integrity had been compromised.

I did this for every SSD.

With the auto-mount disabled, I just followed the steps above.

Good.

Then - maybe - there is still the possibility of a read error of some kind (i.e. if you prefer a malfunctioning of either the source device or of the adapter/converter) or of a write error on the target device/media or even *something else* (unspecified).

Some of the imaging tools (besides plain dd) do create the hash while imaging; not that this is particularly better, but it at least guarantees that the generated hash is that of what is read from the source at the moment of imaging.
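For instance (just a sketch, not an endorsement of any particular tool; device and file names are hypothetical):

# hash the data stream while it is being written to the image file
dd if=/dev/sdb bs=1M | tee ssd1.img | sha256sum

# or use a dd variant that hashes on the fly
dcfldd if=/dev/sdb of=ssd1.img hash=sha256 hashlog=ssd1.sha256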

Fact is that, anyway, hashing is (or has become, doesn't matter) a very poor way to detect possible issues (i.e. it works just fine if everything is fine, but if something fails, the mismatched hash doesn't give any hint about what has failed).

JFYI (and as a side note) there was a proposal to do "better" hashing, check this
https://www.forensicfocus.com/Forums/viewtopic/p=6587736/#6587736
and given links, particularly this one
https://www.forensicfocus.com/Forums/viewtopic/t=11739/

jaclaz

 
Posted : 15/03/2018 4:09 pm