I'm currently using eSATA for my destination drive; I have only one eSATA port for field use, but eSATA in and out on my desktops.
What I would like to see is someone come up with a Thunderbolt to dual eSATA adapter that could be used for in and out. That would be sweet. :)
Anyone seen any Thunderbolt accessories yet, other than the Thunderbolt external drives?
-=A=-
Greetings,
I use eSATA for my destination drive whenever possible. I tested some acquisition tools a while ago and the results are here:
http://integriography.wordpress.com/2010/11/18/testing-acquisition-tools/
In several tests, I imaged a 160GB drive in about 35 minutes. The gating factor here isn't the destination drive interface.
I'm looking forward to trying Thunderbolt for my destination drives. The gating factor at that point will be the source drive.
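For rough context, the back-of-the-envelope numbers from that test:
  160 GB / 35 min ≈ 160,000 MB / 2,100 s ≈ 76 MB/s
which is already in the ballpark of a spinning source drive's sustained read rate, so a faster destination interface only pays off if the source can keep up.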
-David
It's definitely interesting that even with a powerful box like that, the compression still more than doubles the acquisition time.
Something I'd like to test next is processing time on the resulting images, uncompressed versus fully compressed. I'll have to make a note to try that if I ever get the chance. It would be interesting to see how long a hash/signature/keyword search on the uncompressed image takes versus the compressed one.
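For the hashing part of that comparison, a rough sketch of the sort of timing test I have in mind (image.dd and image.dd.gz are placeholder names for the same data stored raw and compressed, with gzip standing in for a compressed evidence format):
  # hash straight off the uncompressed image
  time md5sum image.dd
  # hash the same data by decompressing the compressed copy on the fly
  time gunzip -c image.dd.gz | md5sum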
Sadly, I do not have handy, reproducible data available like David, and the last time I checked was a few years ago. That said, it's my experience that using "Fast" compression will be faster than no compression or good compression. This is also the conventional wisdom within the "Big Data" systems community, and forensics should not be exceptional in this regard.
The reason for this has to do with the delicate balance between CPU and disk bandwidth. There are a certain number of instructions a CPU can execute while waiting for data to be retrieved from disk. Without compression, the CPU is just twiddling its thumbs. With good compression, the CPU needs more instructions than it can execute in that window, with the result that the disk then sits idle.
Fast compression is like Goldilocks: just right. Because it uses an otherwise idle CPU, there's no big cost. And because it compresses the data somewhat, there's also a net benefit on transfer to and from disk.
If I _had_ to choose between no compression or best compression, I would choose no compression; good compression is often so CPU intensive that things slow down by an order of magnitude or two compared to the sustained transfer rate of the disk. But, fortunately, fast compression means we don't have to make that bad choice.
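If anyone wants a rough feel for this on their own hardware, here's a quick sketch using gzip's levels as a stand-in for the imaging tools' "fast" and "best" settings (image.dd is a placeholder for any large raw image on fast local storage):
  # read-only baseline: roughly the disk's sustained read rate
  time cat image.dd > /dev/null
  # "fast" deflate (level 1)
  time gzip -1 < image.dd > /dev/null
  # "best" deflate (level 9)
  time gzip -9 < image.dd > /dev/null
On typical hardware the -1 run tends to land reasonably close to the plain read, while -9 falls well behind it.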
Jon
Thanks Jon, interesting to hear. My personal preference has been to always use FTK Level 1 compression, or EnCase's "Good" if using that (without a large set of my own benchmarks as reference), largely for the reasons you describe, on the basis that:
The difference in size between the lowest possible level of compression (even if it only compressed the zeroed areas) and maximum compression would be of little significance, as large files like videos and MP3s are likely near optimally compressed anyway. Plus, the simpler the compression algorithm, the faster it should in theory be both to create the image and to decompress on the fly for processing. So it sounds like what you describe confirms that's been a sensible option, and perhaps the uncompressed option is only worth considering for time-critical on-site jobs, with a re-image at low compression back at the lab if necessary.
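As a quick sanity check of the already-compressed-media assumption, gzip's fastest and best levels can be compared on any media file to hand (sample.mp3 is just a placeholder name):
  ls -l sample.mp3
  # fastest vs. best deflate on already-compressed media
  gzip -1 -c sample.mp3 | wc -c
  gzip -9 -c sample.mp3 | wc -c
Both outputs usually land within a percent or so of the original size, so the extra CPU spent at the higher level buys very little on that kind of content.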
Rich
It depends. If the disk you are imaging is full of already compressed data (zip files, MP3s, movie files, etc.), then trying to compress it further is a waste of time, and it certainly will not be faster than imaging with no compression.
One advantage of imaging with no compression is that you always have a good idea of how long it will take; in my experience, imaging the same-sized disk with no compression, using the same imaging technique, takes a very similar amount of time regardless of disk content. This can then be fed accurately into time estimates and quotes for the client. Everyone's happy.
The best compression I'm aware of is adaptive compression, which compresses the data that is compressible but knows not to waste time trying to compress data that cannot be compressed further. The only provider of adaptive compression, as far as I know, is X-Ways Forensics (including MD5 and SHA-256 hashing).
Why would you use both?
Even on a disk full of already-compressed data, the variance should be pretty small with fast compression. In the worst case, zlib stores the uncompressible data inline, with 5 bytes of overhead per block, so there should not be much cost and performance should stay predictable.
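That worst case is easy enough to check with gzip, which uses the same deflate compression (the file names here are placeholders):
  # 100 MB of incompressible (random) data
  dd if=/dev/urandom of=random.bin bs=1M count=100
  ls -l random.bin
  # fast deflate falls back to stored blocks, so the output is only fractionally larger
  gzip -1 -c random.bin | wc -c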
As for adaptive compression: anything that uses zlib should have this property (http://
It's kind of a pity that none of the popular evidence file formats support LZO (http://www.oberhumer.com/opensource/lzo/).
If you're comfortable using dd, it's always possible to pipe data through lzo.
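For anyone who wants to try that, a rough sketch using the lzop command-line tool (the device and file names are placeholders, and a plain dd pipeline obviously isn't a substitute for a proper imager):
  # image the source device, compressing with fast LZO on the fly
  dd if=/dev/sdb bs=4M | lzop -1 > image.dd.lzo
  # later, hash the decompressed stream to verify against the source hash
  lzop -dc image.dd.lzo | md5sum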
Jon
Indeed, I've tried to make this point (not LZO specifically, but very fast/light compression in general) on the EnCase boards, though it never got any traction. It would seem like an easy win for everyone. Hell, you could even sell it to the corporate Guidance types by pointing out that it would make their product appear faster performance-wise, simply by giving people the option, making it the default, and putting a tooltip next to the acquisition box explaining the benefits.
I suspect they're rather busy fixing EnCase 7 right now though. :P