AccessData FTK 4.0: initial impressions


In this post, I will provide some initial impressions and findings.  I do not endeavor to write a white paper, or to employ an industry-standard, scientific methodology to evaluate the tool (if for no other reason than that I am constrained by time).


First, I note that it appears that no one has been able to get FTK to work with PostgreSQL, leading me to conclude that the product was shipped without being tested in this regard.  (If a reader has been able to get it working, I encourage you to post a comment here.)   I was not able to get it to work, and I wasted two valuable, otherwise billable, days I had set aside for a client, only to make this discovery.

My review of the AccessData forums indicates identical experiences, and I haven’t found one poster there who yet claims to have finished an evidence load using FTK with Postgres.  (Note: I am unable to determine whether excerpting comments from the private forums would violate AccessData’s terms of use, so I am proceeding with an abundance of caution by not doing so.)


Likewise, I conferred on Friday with another colleague, a lead examiner for a large company, and he replied:

I just had the same experience. I mistakenly upgraded to 4.0, removed Oracle completely, and installed PostgreSQL. That was a mistake . . . some of my run-of-the-mill cases that should only take a couple hours were taking days and had to be killed off. Then, after I removed PostgreSQL and re-installed Oracle I couldn’t get it to forget about the old connection and had all sorts of weirdness with it not finding Oracle some of the time. I eventually backed out FTK, Oracle, and PostgreSQL and did a complete manual cleanup of all garbage files and registry entries and then re-installed everything. I am back to Oracle with 4.0 and things are fine again, but what a mess to deal with this on 3 machines.

And the same experiences are found on the ForensicFocus forums.  Thus, based solely on these numerous anecdotes, and based on my understanding that new purchasers of FTK do not receive Oracle licensing, I have concluded that FTK 4.0 with Postgres is not merchantable (suitable for its intended purpose), although, as noted above, I may be incorrect and would be pleasantly surprised to be proved wrong.

So I, too, reverted to Oracle.   Unfortunately, I couldn’t get the Oracle KFF library for v4.0 posted on AccessData’s FTP site to work.  In browsing through the AccessData forums, I found that neither could anyone else.  AccessData made available to me and certain others who complained a working KFF, which, last time I checked, is not the one available for download at the AccessData FTP site.

Now, like many of the others who have posted to the AccessData forums and elsewhere, I am able to use v4.0 with Oracle.


The three machines I used for testing are as follows:

(1)  FTK & Oracle server (one box) – SuperMicro X8DTL-6F motherboard, LSI SAS2 2008 controller, two RAID-0 volumes each consisting of two OCz Vertex 3 Max IOPS 120GB SSDs (SATA III – one volume for Oracle data; the other for O/S and the adTemp directory), 24GB of DDR3 1333MHz ECC non-registered server memory, and two Intel Xeon X5650 hexacore processors.

(2) Distributed Processing Engine (“DPE”) #1 – Asus M4A89GTD Pro/USB3 motherboard, AMD Phenom II X6 1100T hexacore processor (watercooled, but not overclocked), 16GB of DDR3 memory, O/S residing on an OCz Vertex 3 SSD (SATA III), the temp directory used by the AccessData distributed processing engine residing on a separate OCz Vertex 3 Max IOPS edition SSD, and the pagefile residing on a Western Digital Raptor 10K RPM hard drive.

(3) DPE #2 – Hewlett-Packard DV6 laptop, Intel Core i7 720QM processor, O/S installed on an Intel SATA II SSD, temp directory used by the AccessData distributed processing service residing on a separate OCz Vertex 2 SSD (SATA II), and 8GB of RAM.

Source Evidence Configuration

In an effort to find the fastest evidence load times, I experimented with various combinations of the foregoing.  As a test image, I used a 186GB DD image (ultimately consisting of 1,052,891 evidence items), hosted on a Western Digital 4TB My Book Studio Edition II (SATA II – up to 3 Gb/sec) configured as RAID-0.  I used both KFF alert and ignore, MD5 & SHA1 hashes (but not SHA256 or “fuzzy” hashes), expand compound files, flag bad extensions, entropy test, dtSearch index, create graphics thumbnails, data carve, meta carve, registry reports, include deleted files, and explicit image detection (X-DFT & X-FST).   Using oradjuster, I tweaked the SGA_TARGET parameter to use only 18% of available physical memory during evidence processing.

Distributed Processing

Before continuing, I’d like to mention a few things about the Distributed Processing Engine, which are hard-learned lessons from either failing to read the user guides and appendices, or from experimentation:

(1) To get DPE working, you must have a system share as the path to the evidence in the Add Evidence dialog box.  Without it, no distributed processing will occur.  Likewise, you need to have the Oracle working directory on a public share, the FTK cases directory on a public share, and all systems using mirrored accounts (which Microsoft defines as “a matching user name and a matching password on two [or more] computers”).   You also need to disable the Windows firewall and any other firewalls.  Tip ►  An easy way to make certain the port on a DPE is reachable is to install PortQry v2 and run the command, “PortQry -n {machineName} -p tcp -e 34097”, where “machineName” is the name of your DPE, and where 34097 is the default port (configurable in the distributed processing configuration menu). AccessData ought to include a “test connection” button in the distributed processing configuration — it would probably save their help desk a lot of e-mails and calls.
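If you want to script the same reachability check across several DPEs without installing PortQry, a plain TCP connect tells you the same thing. This is a minimal, hypothetical sketch (the function name is mine; the default port 34097 comes from the text above), not an AccessData-supplied utility:

```python
import socket

def dpe_port_open(host: str, port: int = 34097, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the DPE's processing-engine
    port succeeds within the timeout (34097 is FTK's default)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hypothetical machine names):
# for dpe in ("DPE1", "DPE2"):
#     print(dpe, dpe_port_open(dpe))
```

A `False` result here corresponds to the same failures PortQry would report: the engine service not running, a firewall in the way, or a name-resolution problem.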

(2) And, although the v4.0 System Specification Guide discusses how to configure the adTemp directory on the localhost processing engine (which directory should be located on its own, high i/o throughput drive, because it is the interface between the processing engine and Oracle), I have found no discussion about how to optimize the DPE machines.

Tip ►  On a Windows Vista or Windows 7 machine, if you are logged on as “farkwark,” the distributed processing engine will write its files to Users\farkwark\AppData\Local\Temp.   To relocate this Temp directory to a different drive, you need to create a junction, as follows.  First, log off and log on with a different admin account.  Next, move (not copy) the Temp directory to the different drive (say, F:).  Rename it, if you like (e.g., “FTKtemp”).   Now, from a command prompt, type:

mklink /j “Users\farkwark\appData\Local\Temp” “F:\FTKtemp”

From this point forward, the processing engine will, in fact, be writing its temporary files to the F:-drive, thereby not competing with the O/S drive for i/o.
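To confirm the junction actually redirects writes, you can resolve the Temp path and check where it lands. A small, hedged sketch (the helper name is mine, and the example paths are the ones used above; `os.path.realpath` follows NTFS junctions on modern Python), not anything FTK provides:

```python
import os

def temp_redirected(temp_path: str, expected_target: str) -> bool:
    """Resolve junctions/symlinks in temp_path and report whether it
    actually lives under expected_target."""
    resolved = os.path.realpath(temp_path)
    target = os.path.realpath(expected_target)
    return os.path.normcase(resolved).startswith(os.path.normcase(target))

# e.g. temp_redirected(r"C:\Users\farkwark\AppData\Local\Temp", r"F:\FTKtemp")
```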

Hardware Configuration Experiment No. 1: FTK & Oracle Server + DPE #1 & DPE #2

With this three-machine configuration, I rarely saw the FTK & Oracle server’s 12 cores (24, if one counts hyperthreading) get above a collective 15% load.  DPE #2 (the HP laptop) reached near 100% processing load several times, with up to 6GB of its 8GB of available memory in use. DPE #1 reached between 50 and 80% CPU utilization, with extended periods of low utilization, and used about 8GB (of 16GB available physical memory). Total time was 11 hours, 24 minutes.

Hardware Configuration Experiment No. 2: FTK & Oracle server (one box implementation), alone

With this one-box configuration, the dual Xeon hexacore CPUs were pegged for extended periods of time at 99 or 100% (note, this differs from the experience of others, who have written, “We have multi cored, multi processor CPUs on our systems. What we’ve found is that typically, unless we are password cracking, that the I/O from the disks can’t keep up with resources available. Meaning our CPUs are never maxed out. So the CPUs are not the bottleneck for getting more speed”). The FTK/Oracle server used up to 20GB (of 24GB available) of physical memory (recall that SGA_TARGET was set to 18%).  Total time elapsed was 9 hours, 4 minutes, an improvement of 20.5% over the three-machine distributed processing configuration (no, that’s not a typo).

Hardware Configuration Experiment No. 3: FTK & Oracle server + DPE #1

With this two-machine configuration, the FTK server’s CPU utilization was rarely above 40%, only occasionally reached 60%, and most often sat between 5 and 25%; the server used between 8 and 12GB (of 24GB available) of physical memory.  Meanwhile, DPE #1’s CPU utilization was pegged at 99 to 100% for extended periods of time, and it used up to 10GB (of 16GB available) of physical memory.  Total time elapsed was 8 hours, 33 minutes, a 25% improvement over the three-machine distributed processing configuration, but only a 5.7% improvement over the FTK/Oracle one-box solution.

Hardware Configuration Experiments Conclusion

Based on the type of hardware I am using, I found very little benefit (up to a 6% processing time improvement) and, in fact, some detriment (over a 20% processing time loss) in stringing together numerous DPE workstations.  My experience is inconsistent with AccessData’s findings of processing time differences between stand-alone boxes and distributed processing clusters.
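The percentages in these experiments follow directly from the recorded wall-clock times; as a sanity check, the arithmetic can be reproduced in a few lines:

```python
def minutes(h, m):
    # Convert an hours/minutes wall-clock time to total minutes.
    return 60 * h + m

three_box = minutes(11, 24)  # Experiment 1: server + DPE #1 + DPE #2
one_box = minutes(9, 4)      # Experiment 2: server alone
two_box = minutes(8, 33)     # Experiment 3: server + DPE #1

def pct_gain(slower, faster):
    # Percentage improvement of the faster run over the slower one.
    return round(100 * (slower - faster) / slower, 1)

print(pct_gain(three_box, one_box))  # 20.5 - one box vs. three machines
print(pct_gain(three_box, two_box))  # 25.0 - two machines vs. three
print(pct_gain(one_box, two_box))    # 5.7  - two machines vs. one box
```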

Add-on Modules: Data Visualization & Explicit Image Detection

Initially, I thought the Data Visualization module did not work. No matter whether I attempted to view a directory containing several score files, or the entire million-plus items, it never displayed more than a handful of results (sometimes zero or one, resulting in a pie graph that was just one big green circle).   It turns out (of course) that it was my fault for failing to select the appropriate date range.  Had I read the manual first, I would have noticed, “Information can only be displayed for the date that you have selected.” FTK 4.0 User Guide at 200.  Apparently, when the Data Visualization tool first opens, it defaults to one day (the first day of the oldest evidence in the list), which is not very intuitive.

AccessData claims that the Data Visualization add-on component “provides a graphical interface to enhance understanding and analysis of cases. It lets you view data sets in nested dashboards that quickly communicate information about the selected data profile and its relationships.”  Among other things, it purportedly provides “a complete picture of the data profile and makeup,” empowers the examiner to “Understand the file volume and counts through an interactive interface,”  and “Create a treemap of the underlying directory structure of the target machine for an understanding of relative file size and location” (similar to, but not as elegant as WinDirStat).

In summation, the tool appears to work as designed, although I haven’t done any substantive reporting off of it.  One user posted on the AccessData forum that there appears to be no way to export the graphs into a report, but this can be easily remedied by taking a screen clipping using SnagIt, Microsoft OneNote, or a screen print.

Also, I have been experimenting with the EID. AccessData states, “This image detection technology not only recognizes flesh tones, but has been trained on a library of more than 30,000 images to enable auto-identification of potentially pornographic images . . . AccessData will continue to integrate more advanced image search and analysis functionality into FTK. Customers who have added the explicit ID option to their Forensic Toolkit® license and are current on their SMS will automatically receive those new capabilities as they become available.” Notwithstanding this commitment, it appears that no additional functionality has been added since its release with FTK 3.0.  I also note that the technology is unlike Microsoft’s “PhotoDNA,” which is reputed to process images in less than five milliseconds each and to detect target images accurately 98 percent of the time, while reporting a false alarm one in a billion times. Comparatively, AccessData’s EID has been found to achieve 69.25% effectiveness with 35.5% false positives. Marcial-Basilio, Aguilar-Torres, Sánchez-Pérez, Toscano-Medina, Pérez-Meana, “Detection of Pornographic Images,” 2 Int’l Journal of Computers 5 (2011).

My experience reveals many false positives (such as, “small_swatch_beige.png,” an image consisting of a plain beige coloured box, ranking at the very top of the list compiled by the X-ZFN algorithm, which is supposed to be the most accurate of the three algorithms), and seems to confirm that the algorithm is based on the presence of flesh tones (and nothing more, unless your system has one or more of the 30,000 images that became part of the library at the time the tool was introduced). Nevertheless, if one is short on time (and many law enforcement agencies’ examiners are), the tool does certainly help to reduce the data set that requires manual review.


March 21, 2012 update

Upon reading this article, AccessData’s President, Brian Karney, contacted me by e-mail, seeking an “opportunity to work with you and get to the bottom of the issues you have identified with the product,” and stating that “PostgreSQL absolutely does work . . . we spent a very long time making sure it was solid and working before making it available to the community.”

At the outset, I note that I brought this matter to AccessData’s tech support several weeks earlier, complaining that I had concluded PostgreSQL hadn’t been tested with 4.0, and that it seemed [to me] that users were being used as AccessData’s UAT testing department.  Although I requested a call back, and although I separately mentioned to my sales rep that I might be working on a review article, the tech support complaint was disregarded (i.e., no reply whatsoever, whereas other e-mails with specific questions, bug reports, enhancement requests, and documentation feedback were all or mostly answered).

Nevertheless, I appreciate Mr. Karney’s attempt to set the record straight and, as is evident from some replies to this review, some users have proven me wrong, reporting that they have been able to complete evidence load processing using both v4.0 and PostgreSQL.   Karney indicated that he had directed his staff to comb the AccessData forums to aggregate the complaints from other users and to conduct an inquiry into the matter.


April 27, 2012 update

Upon consideration of some of the comments, I made a few hardware investments and configuration changes, and have the following results to provide:

  • I doubled the RAM from 24GB of non-registered ECC 1333MHz memory to 48GB of registered ECC 1066 MHz memory (an approximately $500 upgrade)
  • I reconfigured the four OCz Vertex 3 Max IOPS 128GB SSDs as a single RAID-0 volume
  • I purchased two OCz Vertex 4 128GB SSDs, based on Marvell controllers with Indilinx firmware (an approximately $400 upgrade), to configure another, separate RAID-0 volume for the C:-drive.
  • Both RAID-0 volumes resided on one LSI 2008 SAS2 controller
  • Between tests, I logged off of Windows to allow the garbage collection routines to run overnight on the SSDs, because TRIM commands are not passed through LSI RAID controllers (see Les Tokar, Garbage Collection and TRIM in SSDs Explained (April 16, 2012)).
  • All other hardware is the same as described hereinabove
  • I upgraded to FTK 4.0.1
  • I set the process priority to “high” via task manager for both ADprocessor and ADloader on the processing engine server and, if applicable, the distributed processing engine.
For the first several tests, I did not check the Oracle memory settings in the Windows registry.  I assume, however, that by running Oradjuster.exe, the additional memory would be detected and changes made accordingly.  The image source drive, source image file, and file processing options were all the same as in the prior tests (above).

Note that for test #3, below, I tried moving the adTemp directory to the same 4-disk RAID-0 array used by Oracle.  I did this to test the theory, advanced by several SSD reviewers, that the OCz SSDs’ greatest benefits are realized at higher queue depths.  After noticing a slight performance decrease, I moved it back.

Test #1

  • No distributed processing engines configured (one-box FTK processing engine & Oracle server combination)
  • sga_max_size = approx. 19GB, sga_target = 18%
  • C-drive (two Vertex 4, RAID-0):  configured for operating system, FTK cases folder, and ADtemp directory
  • E-drive (four Vertex 3 max IOPS, RAID-0): configured solely for Oracle
  • Total Job time = 08:56 (processing = 04:51, postprocessing = 00:06, indexing = 08:54)

Test #2

  • No distributed processing engines configured (one-box FTK processing engine & Oracle server combination)
  • sga_max_size = approx. 19GB, sga_target = 37%
  • C-drive (two Vertex 4, RAID-0):  configured for operating system, FTK cases folder, and ADtemp directory
  • E-drive (four Vertex 3 max IOPS, RAID-0): configured solely for Oracle
  • Total Job time = 08:21 (processing = 06:12, postprocessing = 00:12, indexing = 08:20)

Test #3

  • No distributed processing engines configured (one-box FTK processing engine & Oracle server combination)
  • sga_max_size = approx. 19GB, sga_target = 37%
  • C-drive (two Vertex 4, RAID-0):  configured for operating system and FTK cases folder
  • E-drive (four Vertex 3 max IOPS, RAID-0): configured for both Oracle and ADtemp directory
  • Total Job time = 08:45 (processing = 06:03, postprocessing = 00:18, indexing = 08:44)

Test #4

  • DPE #1 used
  • sga_max_size = approx. 38GB, sga_target = 37%
  • C-drive (two Vertex 4, RAID-0):  configured for operating system, FTK cases folder, and ADtemp directory
  • E-drive (four Vertex 3 max IOPS, RAID-0): configured solely for Oracle
  • Total Job time = 08:11:17 (processing = 08:04, postprocessing = 00:06, indexing = 08:10)

Test #5

  • DPE #1 used
  • sga_max_size = approx. 38GB, sga_target = 15%
  • C-drive (two Vertex 4, RAID-0):  configured for operating system, FTK cases folder, and ADtemp directory
  • E-drive (four Vertex 3 max IOPS, RAID-0): configured solely for Oracle
  • Total Job time = 06:47 (processing = 06:03, postprocessing = 00:40:53, indexing = 06:46)


$900 of hardware investment (doubling the memory, and adding more SSDs to the RAID-0 volume) provided a 43-minute (7.9%) performance increase over the 9 hour 4 minute single-box best time in my first tests. Using the same Oracle memory configurations, running a single DPE provided a negligible 10-minute (2%) improvement over running a stand-alone FTK one-box solution. But, by then changing sga_target to 7.7GB, total processing time was reduced by an additional 17% to less than seven hours for the same image file.
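For readers who want to verify the arithmetic, the percentages come straight from the job times reported above:

```python
def minutes(h, m):
    # Convert an hours/minutes job time to total minutes.
    return 60 * h + m

march_best = minutes(9, 4)  # best one-box time from the March tests
test2 = minutes(8, 21)      # upgraded one-box, sga_target = 37%
test4 = minutes(8, 11)      # one DPE, sga_target = 37%
test5 = minutes(6, 47)      # one DPE, sga_target = 15%

print(march_best - test2)   # 43-minute gain from the hardware upgrades
print(round(100 * (march_best - test2) / march_best, 1))  # 7.9 (percent)
print(test2 - test4)        # 10-minute gain from adding one DPE
print(round(100 * (test4 - test5) / test4))  # 17 (percent, from lowering sga_target)
```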

More experimentation, such as adding a PCIe OCz RevoDrive X2 card, separate PCIe RAID controller cards for each RAID volume, Windows dynamic disk configurations, faster processors (e.g., X5690 Xeon processors rather than X5650), or a differently configured DPE, may yield better results. With this hardware configuration, there is some added benefit to lugging an extra DPE computer and monitor to a remote job site, but, as my March 2012 initial tests suggested, certain DPE configurations can actually impair performance.

32 thoughts on “AccessData FTK 4.0: initial impressions”

  1. Background info: I am a student doing an internship and have been here since January. I don’t know anything about how things worked before I got here, nor do I know how things are “supposed” to work. 🙂

    I upgraded our FRED machine from 3.4 to 4.0, with PostgreSQL. There are some remnants of Oracle on the hard drive, but whoever upgraded/installed 3.4 had already made the switch to PostgreSQL. This is a Windows 7 64 bit machine, 32 GB RAM, Xeon X5460 (x2).

    I ran into one issue immediately after installation. A “feature” that was demonstrated during the World Tour – using Windows credentials to log in, thereafter bypassing the log in prompt – causes it to stop accepting the password, leaving you locked out. A quick call to tech support and I was emailed instructions on how to reset the Postgres password and I got that all fixed up. I also installed the KFF for version 4 with no issues.

    I created a test case and added a 160 GB hard drive through the Tableau bridge as “live” evidence. Processing/Indexing that drive took 6 hours. There were 522,371 items on this disk. Running the KFF afterward took 2 hours. I then made an E01 image of the drive using EnCase and added that. Indexing the image file took around 7 hours.

    So, yes, there is at least one person who has successfully processed an image using FTK4 and PostgreSQL…that being me. I will hasten to add that I don’t really know what I’m doing – all of this stuff is new to me. I think I just used the default options, so there may be something that I didn’t do that would have crashed it.


  2. Sean,

    I haven’t done comparison processing tests with 4.0 yet but I did do quite a lot with 3.4.1 using DPE’s and found considerable difference depending on how the host and DPE’s are configured. I’m using Oracle and a Dell 6850 with quad Xenon’s that show 16 cores processing and 72 GB RAM. Two 15k RAID 0 SAS drives for Oracle. Some preliminaries: I found processing is twice as fast when using an E01 image than a dd image so I always process an E01 image. Through extensive testing I found the most improvement in processing is the I/O of the image file hard drive. Not the Oracle drive as recommended by AccessData. So when I really want speed, I put my image file on the fastest drive (or RAIDed drive set) I can find. When I get an SSD drive large enough to hold an image, I’ll use that.

    In my testing I found the best performance was achieved by setting Orajuster to 40%. When set at 10%, the 40% setting was 2.8 times faster. In addition, I achieved the fastest processing by going into Task Manager on the host and changing the CPU priority on ADLoader.exe and ADProcessor.exe from Below Normal to High and going into each DPE and changing the CPU priority on ADProcessor.exe from Below Normal to High. CPU utilization on the host and DPE’s jumped substantially and network traffic between the host and DPE’s jumped from 5-6% to 40% and processing time dropped another 28%. These are the best tweaks I could find for improving processing.

    So I’ve found adding on three very fast DPE’s improves my processing time by 3.7 times over using the host machine alone but only with the tweaks set above. In that regard I feel the cost of the three DPE’s is justified. You might try reprocessing using 4.0 and the tweaks I’ve found and see if you get similar results. A word of caution, I’ve had unstable results setting all the inso.pipe.helper.exe’s on the host to High CPU priority so I leave those alone. I’m not able to change the priority on ftk.exe or oracle.exe so I can’t test those. But watch what happens when you change the priorities on ADProcessor on your host and DPE’s. I’d like for AccessData to change the default priority or to allow us to set the default priority.

    • Randall:

      Thanks very much, indeed, for sharing your optimization tricks, which I will include in my subsequent experiments! For my limited experimentation, I chose 18% for the Oradjuster allocation because, according to AccessData’s User Guide for a one-box deployment (i.e., a configuration with processing engine, user interface, and Oracle on the same machine), “The . . . allowable range [for SGA_TARGET] is typically between 10% and 50%. Enter a percentage in the lower half of the allowed range.”

      I had resolved to increase from 24GB to 48GB and re-run the experiments, and also to fiddle with the SGA_TARGET value and record the results. I also want to try out an OCz RevoDrive 3 x2 PCIe card, as suggested by David Cowen in his related blog post earlier this month.

      I recall, also, that Digital Intelligence conducted some benchmarking and concluded, “Increasing the speed of the system CPU has minimal effect. FTK 3.0 appears to benefit primarily by increasing I/O performance.
      Increasing the amount of system memory is somewhat more effective then increasing the speed of the CPU but is still relatively marginal . . . The ‘Oracle’ drive appeared to benefit most by using a storage device capable of delivering rapid random I/O performance. Some of the fastest processing times were achieved when the SSD drive was used as the ‘Oracle’ drive. It should be noted that putting an SSD in the O/S or Case drive position had minimal effect.” I note that SSD drives have come a long way since January, 2010, and they were using dual Xeon 5420s in their tests (first-generation Xeons). I also need to explore the fact that –as far as I know– trim commands are not passed on to SSDs used in RAID configurations (but are passed on by Windows 7 when the drives are in AHCI mode), which means that the drives will experience performance degradation over time, unless they are “refreshed” using OCz Tool.

      Finally, I note that Cowen also agrees with your finding that speeding up the evidence source drive has a significant effect. However, when doing criminal defense work under the Adam Walsh Act, examiners don’t have a choice about the source media or image type, which is usually provided on a single 7,200 rpm drive (unless the examiner brings additional media, and uses something like VoomTech’s HardCopy 3p to clone the image onto faster media (although I don’t think the HC3P can write to RAID enclosures, last time I checked)).


      • Sean,

        For my experiments I used an E01 image of a 40 GB drive to reduce repeated processing time. I tried Orajuster settings of 10%, 30%, 40% and 55% and found increasing performance up to 40% but a significant decrease at 55%. It’s hard to tell if the 40% was just optimal for my setup or if others will see similar performance. This needs to be tested experimentally by others. But, my experiment shows the advice from AccessData wasn’t optimal for my system. Same regarding hard drive I/O. I was able to experiment with moving the Oracle database, FTK case file, and evidence image file around on my computer from the slowest to the fastest drive I/O system. In doing this I found my fastest performance wasn’t with where I put the Oracle database (as recommended by AccessData), the case file, or the ADTemp file but where I put my image file. Again maybe this is due to my hardware configuration but I would love to hear about others moving things around and what they experience. I don’t know why my findings are at odds with AccessData’s recommendations. It would be nice if they published their test parameters and their test findings with each configuration. Since they don’t, I’d like to see independent testing and publishing of different configurations and their performance. What would be helpful is if AccessData and all of us testing worked with the same test image. That way the only variables tested would be hardware and software configurations.

  3. Like Andrew, I also upgraded from FTK 3.4 to 4.0, with PostgreSQL on a Windows 7 64-bit machine. So I’m yet another person who has successfully processed a couple of images using FTK4 and PostgreSQL.


  4. I installed FTK4 on a fresh install of Windows 7 x64. The only issue I had was the J# component wouldn’t install thru AD installer but I got around that by downloading it directly from Microsoft site. After that everything else installed and is working fine. Now onto putting it thru the normal case work.

  5. I also had no problems with PostgreSQL on a fresh Win7 x64. The only problem I know of is that the installer lets you continue if you don’t type a password for PostgreSQL, and then the service fails to start.

    • I also have had no issues with using postgresql , both from fresh installs and upgrades and this is for FTK 3.4.1/3.4.2 & 4.0 .
      After extensive performance testing of different configurations I primarily use Oracle, as it’s on average 30% faster.

      • It’s encouraging to see that AccessData upper management has gotten involved in this issue. We FTK users do not need the negative feelings that surrounded FTK 2.0 repeated. So it’s more than just an issue of sales. It affects all of us if the product is viewed negatively.

        I’m using FTK 4.0.1. I liked the Postgres installation because it was easy and trouble free. I processed two small cases without difficulty. Then I wasted many hours with a large image that hung up using postgres. I went back to “GO” and installed Oracle. It processed that same large case without incident.

        Some unrelated issues came up and much to my regret, I went back to Postgres. I had a different large file and just as experienced previously it locked up after many hours of processing. I subsequently went back to Oracle and processed the case without incident.

        There have been a number of variables discussed in this thread, but one that I see is “size”.

        I don’t know what the answer is for new subscribers, but I’m sticking with Oracle.

  6. I am using FTK 4 with Postgres (previously used 3.4 with Postgres) and the biggest issue I’m running into is REALLY long loading/processing/indexing times. I’ve gotten to the point where I load each piece of evidence into its own case, which seemed to help on my last exam, but right now I have a 250GB dd image that is going on almost 2 days of loading.

    For those who have used E01 files, are they compressed at all? I may try converting my image to E01 and load that to see if it improves my results.

    • Tara, Read my two posts above to vastly improve your processing speed. But to summarize, 1. An E01 image processes twice as fast as a dd image. 2. DO NOT process your image off an external USB drive unless you’ve got days or weeks to kill. Use the fastest internal I/O drive system you can afford. 3. Disable all background jobs (especially antivirus and Windows Update) so your cores have full attention on your processing. 4. Have at least 2 GB RAM for every processing core, and 5. Go into Task Manager, Processes, right-click on ADLoader.exe and ADProcessor.exe and change their CPU Priority from Below Normal to High. Then process in stages: I-MD5, SHA1, KFF, and Flag Bad extensions. Back up case. II-Data and Meta carve, Expand Compound files. Back up case. III-dtSearch and Entropy test. Back up case. IV-Any other processing necessary.

      Then, if you want to speed up processing another 3.7 times, you need to add three fast DPE’s on a gigabit network and when they start processing go into Task Manager and up the CPU priority on their ADLoader.exe and ADProcessor.exe as well. Then you’ll get some screaming processing performance. FTK 4.0 takes hardware, speed, data I/O and process tweaking to get the fastest processing. But if you can get it all together, it blows the doors off its other major competitors.

      • Randy, got a question: if you do the carving and processing of unallocated disk and file slack and expanding on compound files *after* your stage I processing (MD5 & SHA1 hash computations, and KFF comparisons), then none of the carved items and expanded files will be hashed and compared against the KFF, correct?

      • Sean, That is correct if you don’t also check MD5, SHA-1 and KFF at the same time you do Expand Compound files and Data Carve. Checking to add these will only apply to the new carved or expanded files because previously processed files are ignored. The alternative, one that I prefer, is to go ahead and carve or expand and not do hashing or KFF checks. Many times in my cases I don’t care to identify duplicate files or KFF alerts. But if I do, the last processing I do is to run just hashing and KFF. This is pretty fast because previously processed files are ignored. I primarily run hashing and KFF on initial processing just to make sure there are no corrupted files or problems that will cause FTK to crash. I’d rather it crash on initial simple processing rather than down the road after a lot of work. I have to say I have fewer crashes with 4.0.1 than with previous versions. I still separate processing and back up after each stage so I can recover quickly if needed.

    • Brad, Valid concerns, especially related to personal favorites. The way to prove whether either of these is at play is to do extensive empirical testing yourself. Each program must be configured and used optimally, which in some cases is contrary to the manufacturer’s recommendations. I’ve found that just “using” a program may not get you the best performance. So, experiment with different hardware, different settings, different databases (if that is an option) and different distributed-processing machines with each program until you find the optimal performance, and then compare the optimal performance between programs. Then publish your unbiased observation of which program performed better here for all of us to see. I would look forward to such a comparison.

      • Additional testing results posted today and more to come this evening. Note, I have never used EnCase, and, therefore, have no “favorites.” I’ve been reading very bad things from users about the latest release of EnCase, in fact. The only firm tool-based opinion I really have is that no examiner should reasonably expect to rely on one Swiss-Army-knife tool. For example, FTK may carve out JPEG temporary Internet files –but without MAC times– from a System Volume Information folder. If you want to get a more defensible idea of when and how those files were downloaded, and by whom, you’re going to need a separate tool to mount Volume Shadow Copies, and you’re going to need yet another tool (such as NetAnalysis) that can read Firefox or Chrome databases to get you the metadata you need — otherwise, all you have is JPEG files with a possible date “range” when they were acquired.

  7. I have installed FTK without issue. I notice a significant speed increase over 3.4.

    I am running FTK 4.0103515 on the following: Dell T710 48GB Ram – Windows 7 Pro 64bit – RAID 0 – PERC 6/i

  8. AccessData says that they have no Oracle KFF other than the one on the website. They were going to look up your ticket to see what was provided to you. Can you provide other information that will help AccessData support locate the Oracle KFF for 4.0?



    • Jim, on Feb. 16th, 2012, I was personally instructed to obtain the KFF file not from the usual location, but rather with a user account of “bmcustdl1” and a password that I will not include here. ref:_00D308tv._50040INqNf:ref

      Also, there’s a March 9th thread on the AccessData forums (subscription required), entitled “Unable to install the Oracle KFF to 4.0-Fixed.” On February 20th, Brian Karney (AccessData Pres.) posted, “Here is the install until we have it on the download page . . . http://ftp.acessdata.dom user: bmcustdl1 pwd: ******** . . . Sorry for the issue.”

      And, on March 9th –the final post– a user replied, “Why do I have to hunt through forums for this info.?”

      If you are now being told that “no Oracle KFF other than the one on the Web site” exists, and if that assertion is accurate, it likely is because they finally updated the ordinary download site with the KFF file they individually provided to me and to the AccessData forum users. Note: I have not done a hash-value comparison to determine whether the file now available on the ordinary download site is the same as the one provided to me.


  9. You made some good points. I think it is critical to have your AD temp writing to a fast SSD or SSD RAID. My best processing times came when I used an OCZ 950 GB Ibis card for Oracle, three separate SSDs for the OS, AD temp, and the FTK case, and put the evidence on a fast SSD RAID 0 (or a TI RamSan card), with SGA_target at about 55%. I still do not peg the CPUs for much of the processing time (Mac Pro 2×6-core, Win7, 68 GB DDR3 RAM). Your process-priority tips seem to help, but I still cannot get FTK (3.4.1) to run as fast as it should.

    • Rich, are you running the RamSan-70 or the RamSan-80? Also, regarding your sga_target value, did you also modify the sga_max in your registry, or leave it at the default? What is the current value of your sga_max? Sean
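      For reference, SGA_target and sga_max_size are ordinary Oracle initialization parameters that can be inspected and changed from SQL*Plus rather than the registry. The values below are illustrative only (37G is roughly 55% of a 68 GB machine, as in the comment above); this is a sketch, not AccessData guidance.

```sql
-- Illustrative only: check and raise the SGA from SQL*Plus (as SYSDBA).
-- 37G is roughly 55% of a 68 GB machine, per the comment above.
SHOW PARAMETER sga_target
SHOW PARAMETER sga_max_size

-- SGA_TARGET cannot exceed SGA_MAX_SIZE, and SGA_MAX_SIZE cannot be
-- changed online, so stage both in the SPFILE and restart the instance.
ALTER SYSTEM SET sga_max_size = 37G SCOPE = SPFILE;
ALTER SYSTEM SET sga_target   = 37G SCOPE = SPFILE;
```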

  10. Is there anything similar to “Oradjuster” for PostgreSQL? I have upgraded my computer’s main memory from 4 GB to 16 GB and I want to know if FTK will be able to use as much as it can.
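    To my knowledge there is no Oradjuster-style utility for PostgreSQL: it will not automatically grow into the new 16 GB, and its memory use is governed by postgresql.conf. The fragment below is a commonly cited community starting point for a 16 GB machine, not an AccessData recommendation; a service restart is required after changing shared_buffers, and on the Windows builds of that era very large shared_buffers values may not help, so tune empirically.

```conf
# postgresql.conf -- illustrative starting values for a 16 GB machine.
# Common community rules of thumb, not AccessData-recommended settings;
# restart the PostgreSQL service after changing shared_buffers.
shared_buffers = 4GB            # ~25% of RAM is a frequent starting point
effective_cache_size = 12GB     # planner hint: RAM the OS can use for caching
work_mem = 64MB                 # per-sort / per-hash-join allocation
maintenance_work_mem = 1GB      # index builds, VACUUM
```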

  11. In your article you state, “Likewise, you need to have the Oracle working directory on a public share, the FTK-cases directory on a public share” – I wanted to ask where you got this?

    I am working on getting DPE working and have the evidence on a share and the case folder on a share but saw no mention of this. I ask because I am now getting “Cannot connect to the database” errors in the jobs window.
