I received this from Microsoft this morning while trying to see if our company can use a WinPE/WinFE boot solution.
"I know some of our tools such as DaRT runs on WinPE. DaRT is a standalone toolset available to customer as part of MDOP. It’s a more full recovery environment. If you want to run Win PE for other general purposes, I don’t think we license in that manner anymore."
Well, it doesn't sound like a reply from an "expert" in this.
The point raised earlier is not about "use" or "run"; it is about redistribution of the binaries.
The allowed use of a PE built from the ADK may, however, differ from that of a self-created PE (built from install files for which a corresponding "full" OS license exists).
jaclaz
To clear up the questions
We don't give WinFE away, we advise anyone using the software to make their own distribution. For users who have assisted us with trials, we have provided the WinFE environment for their use until they build their own. We are currently working on a version that will work under a Linux boot environment, however until this is ready, customers are advised to build a WinFE disc using their own Windows licenses.
As I understand it, the previous user asked for a copy of our ISO to compare with the one they use. Ballistic will work under your own WinFE build; just ensure that USB 3.0 drivers, and drivers for any express adapter you wish to use, are included in the image.
Ballistic is software supplied on:
- Collection hardware (drives): we supply the software on one drive along with the associated cables and connectors you need (you add other drives yourself). You can also buy a full set of collectors (4 drives), giving 5 in total.
- Alternatively you can (although no-one has yet) buy a licensing platform from MCMS, which is all software and no hardware. This allows you to license ANY drive with the software for a time period of your choosing.
The fastest the software has run is 503 MB/s, or about 30 GB/min. All dependent on several factors: ports, machine age/power, hard disk.
Jaclaz, happy to send you a brochure; drop me an inbox.
I would investigate the entire imaging process with this new device. The data transfer rate is only one part of the equation in total acquisition time.
How are you hashing your drives when complete?
If you are running MD5 and SHA-1 and SHA-256, you could cut your time considerably by hashing with SHA-1 alone.
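For illustration, a rough Python sketch of the point (the path is a placeholder, and a real imager hashes inline during acquisition rather than in a separate pass): each extra algorithm multiplies the per-block CPU work, so if hashing is what bounds your throughput, dropping to SHA-1 alone buys that time back.

    import hashlib

    # Hash a source in one sequential pass with a configurable set of
    # algorithms. Every algorithm added means more CPU work per block read.
    def hash_image(path, algorithms=("sha1",), chunk_size=1024 * 1024):
        hashers = {name: hashlib.new(name) for name in algorithms}
        with open(path, "rb") as f:
            while chunk := f.read(chunk_size):
                for h in hashers.values():
                    h.update(chunk)
        return {name: h.hexdigest() for name, h in hashers.items()}

    # hash_image("image.dd", ("md5", "sha1", "sha256"))  # triple the hashing work
    # hash_image("image.dd", ("sha1",))                  # SHA-1 only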
Use a good destination drive as well - I like WD VelociRaptors
To clear up the questions
Thanks ), so we are back to the previous (rough) definition.
So, all in all, it is something "comparable" to FTK Imager, only much faster, right?
The fastest the software has run is 503 MB/s, or about 30 GB/min. All dependent on several factors: ports, machine age/power, hard disk.
Very good ), but, with all due respect, obvious 😯: of course it depends on ports, machine age/power, and hard disks, and I would also add "quality of cables", as I have seen here and there reports of generic issues with both SATA and USB cables being *somehow* defective.
I cannot say how far it is doable (legally), and/or whether you can actually do it, but I would (personally) appreciate a "comparative" test of the tool against one (or more) "common" tools on the same "declared" hardware.
Even if done against a simple freeware tool (admittedly on the "slow" side of "imaging tools") such as the DSFOK toolkit or one of the various dd ports to Windows, rather than against any of the commercial tools (to avoid any possible legal issues), it would IMHO give a feeling of the speed increase obtainable on one's own hardware.
Example (completely faked data):
Machine "x", make/model "y", OS Windows "n" (or WinPE "n"), RAM, etc., imaging a 500 GB disk (make/model) to a 1 TB hard disk (make/model) connected through BUS "z":
dsfo time: 512.33
ballistic time: 035.14
This would give (still IMHO) a more practical feeling of the increased speed of the thingy.
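Something like this is all I mean (a hypothetical Python harness; both command lines are placeholders, I am not quoting either tool's real syntax): the only requirement is that each tool is timed over the very same source and destination.

    import subprocess, time

    # Time one imaging run; the same function is used for every tool under test.
    def time_tool(cmd):
        start = time.monotonic()
        subprocess.run(cmd, check=True)
        return time.monotonic() - start

    # Placeholder invocations - substitute the real command lines:
    # dsfo_secs = time_tool(["dsfo", "\\\\.\\PhysicalDrive1", "0", "0", "E:\\image.dd"])
    # ballistic_secs = time_tool(["ballistic", "--source", "...", "--dest", "..."])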
Jaclaz, happy to send you a brochure , drop me an inbox.
That would be very kind of you, though I would suggest you instead publish it (or a reduced version of it with the main points, should there be something in it under NDA or similar).
jaclaz
The posted imaging speeds are really good. So good that they beg to be seen in a comparison test (that is a good thing). I bet Eric Zimmerman would gladly accept a demo of the tool to add to his extensive imaging tests.
My concern with the imaging process, as I understand it, is that the image is spread out across several storage devices. If there is parity, no problem. If there is no parity, then the chance of a hardware failure increases with each additional device (or of forgetting to bring back an external USB drive you plugged into the back of the machine…).
If 1 TB were imaged across 3 or 5 devices, the segmented(?)/striped(?) parts of the image would need to be reconstructed on one storage device later at the shop. Doing this onsite would add to the time and defeat the purpose of the imaging speed increase. And discovering at the shop that you left part of the system plugged into the suspect/custodian computer would require going back in.
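Assuming (and it is only an assumption, since the on-disk format has not been described) that the tool writes plain sequential segments, the back-at-the-shop step would look something like this sketch: concatenate the segments in acquisition order and re-hash the result against the acquisition hash.

    import hashlib

    # Concatenate image segments (in acquisition order) onto one destination
    # and compute SHA-1 over the reassembled stream for verification.
    def reassemble(segment_paths, output_path, chunk_size=1024 * 1024):
        sha1 = hashlib.sha1()
        with open(output_path, "wb") as out:
            for seg in segment_paths:
                with open(seg, "rb") as f:
                    while chunk := f.read(chunk_size):
                        sha1.update(chunk)
                        out.write(chunk)
        return sha1.hexdigest()  # compare against the hash taken at acquisition

    # reassemble(["E:/img.001", "F:/img.002", "G:/img.003"], "D:/reassembled.dd")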
All due respect, if you can leave anything behind…….. this is designed to cut the most precious thing of all: TIME (especially now that you can get 6 TB drives). If you are in situations where time is of the essence, then this is for you. I have heard horror stories of vital evidence being left behind because current systems are too slow to capture the required image. Our system will beat everything. I conducted a demo last week and the client did not believe the speeds on my demo laptop. He pulled out a tower PC and plugged it in. The Linux machine was booted and 320 GB imaged in 70 minutes, no drive removed (SHA-1). Needless to say, he placed an order.
I have heard horror stories of vital evidence being left behind because current systems are too slow to capture the required image.
Which is fine, of course, as I have also heard them ) , but also many selling stories *like*
Our system will beat everything. I conducted a demo last week and the client did not believe the speeds on my demo laptop. He pulled out a tower PC and plugged it in. The Linux machine was booted and 320 GB imaged in 70 minutes, no drive removed (SHA-1). Needless to say, he placed an order.
jaclaz
Lots of negatives on here. We have a new capability to tackle increased data sizes. Inbox me if you are genuine and would like to move with the times. Cheers.
Lots of negatives on here. We have a new capability to tackle increased data sizes. Inbox me if you are genuine and would like to move with the times. Cheers.
Nothing "negative", only trying to separate the wheat from the chaff.
Not that doubting whether people are genuine, or hinting that they may be unable or unwilling to move with the times, is particularly "positive thinking", BTW.
I don't think that asking for some more complete info on the hardware involved (as opposed to "a tower PC" or "my demo laptop" or "the Linux machine") is asking that much, but of course you are perfectly free not to provide it ).
Again, I have no doubt whatsoever about the tool being a nice one and about it being very fast, as you say; I would only like to understand how much faster it is when compared with other tools.
jaclaz
I would like to see some further supporting information too.
As I see it there are three potential points of bottleneck when imaging: 1) the speed of the source drive, 2) the throughput of the imaging system, 3) the speed of the destination drive(s).
If 1 is your bottleneck, then 2 and 3 become irrelevant;
if 2 is your bottleneck, then 3 is irrelevant.
Obviously part of the picture is how 1 is connected to 2, and 2 to 3, but if you are imaging an IDE/SATA drive then this is defined for us; in the case of this imaging equipment the connection to the destination drives is flexible and multiple.
3 only becomes the bottleneck when you can suck data off 1 and push it through 2 faster than 3 can cope.
The only bit of this chain that is out of our control is 1, generally by definition, if we connect its interface cable directly to 2.
Our goal is always to make 1 the bottleneck.
The throughput of 2 will be defined by the operating system, the processors, and what we do before we spit the data out to 3 (MD5/SHA-1/compress).
The performance of 3 is defined by its inherent speed and that of its interface, but also by what we write. If we compress data as we write it, then we write less data, and as long as the compression algorithm does not cause a bottleneck at 2, this would/could shift the bottleneck back to 1.
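As a toy model of this chain (made-up figures, Python sketch): the effective rate is the minimum of the three stages, and compression at 2 raises the effective ceiling of 3 because fewer bytes reach it.

    # Effective imaging rate for the source (1) -> system (2) -> destination (3)
    # chain. compression_ratio is compressed/original size, so a ratio below 1
    # means stage 3 absorbs fewer bytes per byte read from the source.
    def effective_rate(source_mb_s, system_mb_s, dest_mb_s, compression_ratio=1.0):
        dest_effective = dest_mb_s / compression_ratio
        return min(source_mb_s, system_mb_s, dest_effective)

    print(effective_rate(180, 400, 120))        # 120: destination-bound
    print(effective_rate(180, 400, 120, 0.5))   # 180: 2:1 compression moves the bottleneck back to the source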
The performance of 3 is also determined by how we write data: if 3 is FAT formatted, for instance (I know this is unlikely), then as data is written a FAT chain would be updated. There are ways around this: a file could be created with contiguous disk space allocated up front, big enough for the entire image, which would stop the allocated space growing incrementally. Writing to a raw device, i.e. ignoring any operating system, would be better still: essentially disk-to-disk cloning.
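A minimal sketch of the preallocation idea on a POSIX system (os.posix_fallocate is Unix-only; a Windows build would need the equivalent Win32 calls): reserve the full image size before writing begins, so the filesystem is not extending the file, and updating its allocation structures, on every write.

    import os

    # Reserve space for the whole image before acquisition starts, so
    # allocation metadata is not updated incrementally during writing.
    def preallocate(path, size_bytes):
        fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
        try:
            os.posix_fallocate(fd, 0, size_bytes)  # actually reserves blocks (not a sparse truncate)
        finally:
            os.close(fd)

    # preallocate("/mnt/dest/image.dd", 500 * 10**9)  # room for a 500 GB image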
All this has been done for many years, and other than writing to multiple devices (which is effectively achieved by writing to a RAID array) the only thing I see that makes this system different from any other is that you effectively have an array of disparate devices connected via different interfaces.
I would be very interested in seeing how this system performs against a similar hardware setup with a RAID array as a destination.