Okay, initial impressions are the new GUI is well laid out and I like the amount of information supplied to understand what each step and function is for.
Not many chat clients or apps are listed as supported, but the most common ones you would look for are there (Facebook, Twitter, YouTube etc).
Here's the kicker though….
I pointed it at an image of a 600GB hard drive and selected disk search. Understandably that is going to be the most time-intensive search of any, sector by sector across the whole drive, but after 18 hours it was only at 16%.
The same drive image (whole disk) was processed by IEF in just shy of 9 hours.
I'll withhold any further comment until I see the content that IXTK finds, because I can easily overlook a lengthy processing time if the content found at the end is amazing.
Comparison finished, and while one test job is far from an exhaustive test process, the results were not sufficient to inspire me to further testing.
Analysis machine specs
Win7 x64
Core i7 CPU
64GB RAM
RAID 0 processing drives
WD Black drive with source image
IXTK
Processing time - 42 hours
Total artifacts recovered - 50k (according to the dialog box at the conclusion)
On attempting to open the results there was an 'out of memory' exception error and IXTK crashed. On reload the case opened smoothly; however, only 26k records are now being shown.
Artifacts
IE - 2796 (all history related and all appear to be google mail URLs)
Facebook photo URL - 18
Facebook ad URLs - 63
There are also 26629 artifacts listed under the generic heading 'My artifact searches'. When I display them they appear to be largely carved JPG files; randomly checking them, I couldn't get any pictures to display.
Magnet IEF
Processing time - 9 hours
Artifacts recovered were
Classified URLs - 8841
Cloud Service URLs - 651
Facebook URLs - 696
Various Google Analytics cookies - 240
Google Map Queries - 258
Malware/phishing URLs - 8
Parsed Search Queries - 4798
Rebuilt Webpages - 659
Social Media URLs - 1049
Chat
Skype accounts - 1
Skype Calls - 59
Skype Calls Carved - 84
Skype Chat Messages - 180
Skype Chatsync Messages - 6
Skype Chatsync Messages Carved - 6
Skype Contacts - 27
Skype File Transfers - 1
Skype Group Chat - 2
Skype IP addresses - 42
Google Docs - 58
Gmail fragments - 122
Gmail Webmail - 1801
Carved Video - 1109
Facebook Pictures - 3
Pictures - 101988
Videos - 189
Web Video Fragments - 2
Ares Search Keywords - 5
Facebook Pages - 30
Facebook Status updates - 14
Browser Activity - 12888
Chrome
Bookmarks - 3
Cache Records - 5332
Cookies - 274
Current Tabs - 5
Downloads - 1
FavIcons - 235
Keyword Search Terms - 8
etc..etc..etc..
I can't be bothered typing anymore but I think the results speak for themselves.
To be fair, your comparison is very subjective because you failed to complete the test, and validation of the results is not discussed either. Unless you took the time to fully understand how to use IXTK, your (plain view) observations and time results are not unusual.
If I run EnCase (which I am a fan of) across every partition in a hard drive and commit to that search all of the EnScripts at my disposal, it would be interesting to see if it can deliver its results within 9 hours. Unless you are testing and comparing very specific search features one-for-one, any departure from that skews the results and is a false representation of what the software might be capable of.
One thing we do that might be different from the other tool is that we employ GREP expressions throughout our software, because they have a propensity to find more evidence. All of our Keyword and Trace Artifacts utilize GREP expressions. On top of that, our artifact framework employs validation expressions (only for some trace artifacts) to eliminate false positives as much as possible. IXTK is very mindful about false positives because we don't want users wasting time sifting through meaningless data.
Listening to users' feedback and observing public comments, it appears that more people today are settling for push-button tools because they are 'easy' to use. While this might be convenient and time saving, this approach has been often viewed by others in the industry as "data mining" more so than "data investigation". For this reason, we've taken a different approach. We want users to "investigate" the evidence and be able to "validate" it easily within the tool. We want users to be able to confidently take the stand in court with IXTK in hand, with the original carved evidence, and validate the evidence right there on the spot without hesitation.
With IXTK, we've given users the tools and the ability to search multiple devices at the same time. So if a user wants to throw caution to the wind by blindly selecting to search ALL devices, and ALL volumes, and ALL artifacts, and ALL keywords, and ALL trace artifacts… at the same time….WITHOUT taking advantage of the Exclude options we made available or WITHOUT scaling down the search, then that is entirely their choice. But if they do, then it should not come as a surprise if it takes a long time.
Some tools probably look for static keywords like "John Doe" and call it a day, and that's why they might appear to be SO MUCH FASTER than other tools. IXTK takes the approach of using GREP expressions to find variations of the same data, for example "JohnDoe", "johndoe", "john doe", "johnDoe", "john\x0D\x0Adoe", "john_doe", "john-doe", "john.doe" etc. So in this very simple example, another tool might take .005 seconds to find the static, case-sensitive term "John Doe", while IXTK might take .5 seconds to find the other variations. This might be a bad example but it conveys the point.
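To make that concrete, here is a hypothetical sketch of the kind of pattern being described (this is not IXTK's actual expression, just an illustration of the principle):

```python
import re

# Hypothetical pattern (for illustration only): match "john" and "doe"
# separated by nothing, whitespace (including a CRLF line break), a dot,
# an underscore, or a hyphen, in any letter case.
NAME_PATTERN = re.compile(r"john[\s._-]{0,2}doe", re.IGNORECASE)

variants = ["JohnDoe", "johndoe", "john doe", "johnDoe",
            "john\x0D\x0Adoe", "john_doe", "john-doe", "john.doe"]

# The single expression catches every variant...
matched = [v for v in variants if NAME_PATTERN.search(v)]
assert matched == variants

# ...while a static, case-sensitive search for "John Doe" catches none of them.
assert not any("John Doe" in v for v in variants)
```

Evaluating a character class with an optional quantifier at every position is inherently more work than scanning for a fixed literal, which is where the time difference comes from.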
Since IXTK Keywords and Trace Artifacts utilize some pretty long GREP expressions, imagine how many other permutations have to be evaluated! Every check box selected only exponentially increases search time. But don't be fooled. This goes for any forensic tool that offers the same flexibility. Perhaps the only mistake we made is that we put all of the keywords and trace artifacts together in an easy-to-locate tree view control. If a user checks only the ROOT node of the list for both, then voila! Get out the paint and watch it dry, because you WILL be waiting around. For this reason, in our instruction videos and in the classroom, we always advocate a more sensible, surgical approach to forensic examinations of Internet evidence. But, with IXTK 5.11 Beta, I think we might have made a bit of a compromise.
If a user targets the C:\Users directory on a volume (which is where most data resides on a Windows machine) and then searches for "browser artifacts", they WILL get fast results.
Having said that, I concede that IXTK 5.5 could have been optimized in many ways to make it easier for the untrained user to use. For this reason, we took a long, hard look (2+ months) at how IXTK was approaching the problem. I personally spoke to several seasoned practitioners in the industry and asked them how they were doing their cases from start to finish. I asked them to be very candid with me on how they thought we should be approaching the problem, and we listened. I listened.
Last night we released Version 5.11 Beta, which completely changes our approach to the problem. Now, users really have only two search options: PARSE FILES or CARVE DATA. When we actually looked at our 3 types of artifacts (FILE, KEYWORD, TRACE), we realized that 9 times out of 10, people were after the FILE artifacts first and foremost. Keyword and Trace artifacts (fragments), which are the more time consuming, came next.
With the PARSE FILES option, IXTK now only searches for file name matches on a volume. Moreover, we've added powerful INDEXING up front, which dramatically decreases search times. Why? Because with indexing, you can index an entire volume in just minutes and pre-identify artifacts (by group). Once indexed, they remain indexed. So for example, if you have a volume with 1,000,000 files and you run the INDEX in IXTK, you can scope your analysis down to only a few thousand files (e.g., 5,000).
INDEXING in IXTK 5.11 means that users no longer have to worry about cherry-picking files and folders in a tree view control each time they start a new search. Now they can go back to the indexes and search only those files. At present, indexing is restricted to Browser Files, Pictures (using file signatures too), Videos, Deleted Files, SQLite Databases and Cloud/P2P files. While P2P files are indexed, IXTK can only process them using Keyword or Trace artifacts. Now that this major overhaul is pretty much complete, we are turning our full attention to artifacts.
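The index-once, search-many idea described above can be sketched in a few lines. This is not IXTK's actual implementation; the extension-to-group mapping and function name below are made up for illustration. One pass over the volume buckets files by artifact group, so later searches consult only the relevant bucket instead of re-walking the whole tree:

```python
import os
from collections import defaultdict

# Hypothetical artifact groups, keyed by file extension (a real tool
# would also check file signatures, as noted for Pictures above).
ARTIFACT_GROUPS = {
    ".sqlite": "sqlite_db", ".db": "sqlite_db",
    ".jpg": "picture", ".jpeg": "picture", ".png": "picture",
    ".mp4": "video", ".avi": "video",
}

def build_index(root):
    """Walk `root` once and bucket matching files by artifact group."""
    index = defaultdict(list)
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            ext = os.path.splitext(name)[1].lower()
            group = ARTIFACT_GROUPS.get(ext)
            if group:
                index[group].append(os.path.join(dirpath, name))
    return index
```

After the single walk, a "browser artifacts" or "pictures" search only has to open the few thousand files in its bucket, which is where the claimed speedup comes from.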
Users can now expect monthly updates to our artifacts library, starting with the likes of Instagram, WhatsApp, Snapchat, etc. As these new artifacts arrive, there will be fewer arguments over the 'quantity' of artifacts found when running forensic tool comparisons.
But one thing I would like to mention while I am here. IXTK has all of the implements to filter down on evidence as much as a user would like. The ability to create child records from case records makes it a very powerful and more tangible alternative to bookmarking, even though we support bookmarking too. Also, everything is integrated. Our search tool is integrated with our analysis tool, which is integrated with our reporting tool, etc.
To that point, when we recover Google Chrome history for example, we don't split hairs by reclassifying the same artifact over and over so that we can add it to the queue and put a number beside it. Doing this only sensationalizes the results, which can be a dangerous thing in the eyes of non-techy people (like prosecutors).
IXTK takes a more sensible approach. Using internal (case) Keyword Searching, Dictionary Terms creation, Internet Search Terms Dictionary creation, filters, labels and tagging, users can group and quantify any existing pieces of evidence. You want to find all of the "facebook" artifacts (chat, fragment, URL, photo)? Just use our dictionary. Why take the one URL and reclassify it differently, just so we can call it 5 artifacts instead of 1? I suppose if we wanted to sensationalize the results, we could do that. But we don't.
If you really want to do an experiment, try this
1. Empty your Chrome cache on your forensic machine.
2. Open up Chrome and go to Google.com.
3. Type in "how to build a deck out of wood" and hit ENTER.
4. From the search results, click on only a handful of the results but TYPE NOTHING into the address bar.
5. Now use your favorite Internet evidence tool and parse the History.
6. How many "Search Terms" do you get back?
7. Are these results being forwarded onto investigators or prosecutors to act on? Hmm..
The Google search term here is "how to build a deck out of wood". Total number of search terms = 1. Are you possibly surprised by the results of your other tool?
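The experiment above can be sketched in a few lines. The history URLs below are made up for illustration, and treating only google.com /search URLs with a `q` parameter as typed queries is just one plausible way a tool might deduplicate search terms; it is not any vendor's actual parser:

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical history entries: one typed Google query plus result
# clicks and a repeat results page (all URLs invented for this sketch).
history = [
    "https://www.google.com/search?q=how+to+build+a+deck+out+of+wood",
    "https://www.google.com/url?q=https://example.com/deck-guide",
    "https://example.com/deck-guide",
    "https://www.google.com/search?q=how+to+build+a+deck+out+of+wood&start=10",
]

def search_terms(urls):
    """Collect distinct typed queries from Google search-results URLs."""
    terms = set()
    for u in urls:
        p = urlparse(u)
        # Only /search URLs represent a typed query; /url redirects and
        # destination pages are result clicks, not new searches.
        if "google." in p.netloc and p.path == "/search":
            q = parse_qs(p.query).get("q")
            if q:
                terms.add(q[0])
    return terms

print(search_terms(history))  # {'how to build a deck out of wood'}
```

Deduplicated, the four history rows collapse to a single search term, which is the point of the experiment: a tool that reports every `q=` occurrence as a separate "search term" inflates the count.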
Anyway, I appreciate the opportunity to address your findings, despite the tone in which they were reported. I admit that IXTK, being a completely different tool than other tools, has many ways of approaching the search and discovery process. Without a more targeted approach, IXTK (Version 5.5) WILL undoubtedly seem like it takes forever. To that point, I recognize that it shouldn't be the "user that changes" everything. In this case, it's the tool that needed changing and I think we've struck a good balance with our changes in Version 5.11 Beta released yesterday.
We will be building further on our use of "indexes" and we are definitely "speed conscious" from this point forward. I am particularly looking forward to bringing new artifacts to the software. In case readers did not know this, IXTK is not just a tool for finding artifacts. It includes FaceDNA biometric facial recognition, integrated live real-time browsing and investigations with data capture capabilities, and case management tools.
Signing off.
I appreciate the somewhat belated defense of your tool, and I think you've taken my tone as an attack when it was merely an observation of my personal experience. However, as you have taken the time to dissect my post and respond, I will do you the same courtesy.
While I didn't validate every single artifact recovered, I did make attempts to view the data recovered by IXTK but was unable to view much of it (as stated in my post). I didn't view every single artifact recovered by IEF either, so from that point, yes, you can say the test wasn't completed. But I wasn't setting out to test exhaustively, just to run a quick check to see if IXTK could live up to the claims.
Your comparison to EnCase is irrelevant, as I ran IEF with every single option selected on a disk image, exactly as I did with IXTK; so in so far as it was possible, this was a like-for-like scan: a whole disk image with all available options selected. I would point out that IEF had significantly more supported artifacts. Your assumption that the time difference may have something to do with the extended search capabilities of IXTK sounds great on the surface, but if the search is so much more thorough I would expect to see more results, and only 'positive results'. Instead I saw very little, plus lots of false positives that could not be viewed.
I applaud your encouragement for users to "investigate" and "validate", and had I been able to find any evidence with IXTK I would have used another tool to validate it, as is good practice. I would shy away from trying to interpret what "the community wants" because as part of that community I can tell you I want tools that are functional and work "as advertised", push button or not.
IXTK is not alone in the ability to search multiple devices and partitions, and as I described above, in this test both tools had all available options selected on a disk image, so this was as close as possible to a like-for-like test.
Your John Doe point might hold some validity if there were more results found by IXTK, not many thousands less.
I understand your desire to defend the tool, but my personal experience with the tool is that it fails to deliver as promised, it is incredibly slow and clumsy, the reporting is extremely difficult to manage and it fails to find much evidence when compared to other tools.
Rather than take this as a personal attack, take it as constructive criticism about the tool; from there it's up to you what you do with it.
Unfortunately, there are those who don't take the time to fully understand and use the software properly, and therefore will always throw their hands up in the air and cry 'no contest' at the first sign of trouble. Evaluating any one tool in only one narrow period of time is not a fair evaluation. History is the true metric. I'm sure if you have the opportunity to use our software for more than just a few hours, you might witness how we strive to do things better. Bug fixes, updates and new features come with time, time that you obviously were not privileged to.
I don't wish to debate the merits and the functionality of our software any further. You clearly have a subjective opinion on the matter and I simply do not agree with it. Let's leave it at that. I'm not here to flame the fire. I'm just saying it's a moot point.
I stand behind our software and like many other vendors out there, we endeavor to fix problems as they arise and we bend over backwards to service our customers. We do the best we can. We're not perfect but we work tirelessly to innovate and improve our products wherever possible. We're here to help make the world a better place and we're doing it in our own way.
For the record, we're not trying to match our product against other vendors' tools "feature for feature". We're differentiating ourselves and bringing features to the Internet forensics space that no one else is doing. We're doing much more than simply parsing artifacts. We've brought biometric facial recognition and real-time online investigative and domain research capabilities into a single integrated forensic tool. Some might say these are "good things". Some might say these are different.
At the end of the day, we're contributing to the cause in our own way and at our own pace. Let's stop comparing apples and oranges and be happy and move on.
Internet Examiner Toolkit v5.15 supports iTunes Backups for iOS mobile devices!