Join the forum discussion here.
View the webinar on YouTube here.
Read a full transcript of the webinar here.

Transcript
Geoffrey: Good morning, everyone – and good afternoon to those of you for whom it’s the afternoon. Welcome to a seminar [00:11] that we’re having today on Magnet AXIOM 2.0, focusing on using our products to find information faster and build stronger cases. My name is Geoffrey MacGillivray, I’m the VP of Product Management here at Magnet Forensics, and with me today is Cody Bryant, one of the product managers responsible for Magnet AXIOM.
Cody: Nice to talk to y’all today.
Geoffrey: So we’re going to go through a little bit about Magnet, not too much time, and then dive into some of the features and functionality that’s present in Magnet AXIOM 2.0. We’ve got some demos lined up today, so you’ll see some hands-on … you’ll get some demos of the products themselves, so you can see exactly what we’re talking about in terms of features and functionalities in AXIOM 2.0.
Just a little bit on Magnet Forensics, in case it’s new to you. Many of you on the line may know us as the makers of Internet Evidence Finder or Magnet AXIOM. We are a digital investigation software company. We were founded in 2009 by a police officer and digital investigator that many of you know – Jad Saliba – and he’s very much at the core of how we develop products today. We’re very focused on law enforcement and very focused on building products that help you do your job better. We won Digital Forensics Organization of the Year at the 2017 4:cast Awards. We’re nominated again in several categories, so if you’re interested in voting, we’d appreciate any vote that you can give us in those categories.
We’re also headquartered in Canada, and we’re speaking to you today from Waterloo, which is just outside Toronto. But we have a global presence – we have offices in the US, [02:03] and Asia, [02:05] our users.
The one thing that we always like to talk about is our mission and vision. We really want to help make a difference, and we want to uncover the truth and empower others, and that’s what we try and do through our software. And part of our vision is – well, our vision specifically is to help modernize policing. That’s why we do the technology and software that we do. We want to make sure that we’re equipping law enforcement agencies with the tools that they need to deal with the ever-changing technology landscape and trends. So that’s a big part of who we are, in terms of trying to help you make a difference. And the way that we can best do that is modernizing the way policing is done.
We have, as I mentioned, offices around the world. And the reason why we do that is we support over 4,000 agencies in 93 countries. This is just a [sampling] here, but just to give you an idea that we are focused globally on law enforcement needs, and we’re trying to make sure that we modernize on a global scale as well.
Now we’re going to get into Magnet AXIOM. It’s been over two years since we launched AXIOM, and this version, Magnet AXIOM 2.0, is the fastest and easiest-to-use version of the product yet. We’ve got a lot of great functionality in the product, and we’re going to go through some of the features that have been introduced into AXIOM recently, as well as the features that came with the 2.0 release. And once again, if you have questions as we go through this, please put them in the chat – we’ll be fielding them at the end of the call. If we don’t have time to get to yours, we’ll follow up afterwards. So, please post questions as we go through the presentation.
We always get asked about performance. Certainly in the first year of Magnet AXIOM, we focused a lot on the processing performance within the product, and there is good reason for that. AXIOM is doing more than IEF – it’s indexing, and it has more functionality included in it. But what we heard from customers was that it wasn’t good enough to be doing more and taking more time; it had to be as fast as or faster than IEF. So, a big part of our focus, certainly in those early days, was making sure the processing performance was on par with or better than IEF. And we’ve consistently kept that performance better than IEF on the processing side.
But we didn’t ignore the examination side of the investigation. We know how much time is spent doing examinations. And we are looking at how to improve the performance within AXIOM, and in particular, AXIOM Examine to help speed up examinations.
What some people don’t realize is that there are a lot of different components we take into account when we do a speed evaluation. Jad, our founder, as I mentioned, did a deep dive into this recently, because we were having a lot of conversations with our customers, and he wrote a blog post on it. So, if you’re interested in that blog post, please check out the website – Jad’s got some great commentary and videos talking about performance, where AXIOM excels, and where we’re looking to improve as well.
So, this is a graph that combines a number of examination steps. This could be anywhere from switching views to searching, filtering, and changing to different views such as timelines. So there’s a sort of cumulative score for each of these products. And you can see, if you compare Magnet IEF 2018 to Magnet AXIOM 2018, AXIOM excels in a lot of areas. And I’d encourage you, if you are doing a comparison, just to do a keyword search between the two products, and see the difference in terms of speed and performance there.
When you look at that, that’s what we’re comparing. But we’re not standing still with AXIOM either. If you look at 2017 versus 2018, we’ve made some good performance improvements – an order of magnitude between the version a year ago and today in Magnet AXIOM. And we’re not stopping there. It’s a big focus for us right now; the team is working hard to improve performance and make Examine even faster, and that’s an area we’re continuing to optimize as well.
We launched AXIOM and [06:47] now we’ve been really trying to optimize desktop performance. That’s been one of the main use cases or main areas in which AXIOM is used. But we’re also hearing more and more that AXIOM is being used to access cases over a network connection or cases that are being stored on a network store. That’s a use case that we’re now trying to optimize and make performance improvements on. So, stay tuned in the next several months, we are going to have more performance improvements landing there. But I want to reiterate that the performance is always an area of focus for us and we’re continuing to look, continuing to try to improve the performance both on the processing side and the examination side.
Now, we’ve come to the first section where we’re going to start to get into some demos. One of the features we’re going to start with is the case dashboard. Now, in conversations with many of you out there, we’ve talked about how the artefact view is great, it’s great to go through evidence, but sometimes it can be a little overwhelming and sometimes it’s challenging to find a place to start your investigation. There’s also been requests to get summary information within the case as well. Not only about what was done, but about the device, and also, summary information about the case, to give you a place to start. So, we took all that together and we introduced a case dashboard view into AXIOM, so a case dashboard explorer. And this is now present in AXIOM 2.0. It will work with previously processed cases as well as new cases, so you’ll get this summary view of the case itself, which gives you good summary information, as well as places to start.
And I’ll turn it over to Cody now to start talking about some of the specifics of case dashboard, and he’ll show you an example within AXIOM.
Cody: Hey guys! It’s Cody Bryant here. As Geoff said, I’m a product manager, working with the team here on AXIOM and our IEF products. So, what we’re looking at right now is Magnet AXIOM Examine, and the case dashboard that we’ve added. This is our new in-app home screen, basically, when you launch into a case in AXIOM Examine. And you can see really quickly that we’ve divided the contents into three categories across the top here. Starting over on the left-hand side, we’ve got our Case Overview section. We start at the very top here, with a case summary notes widget. We’ve been talking with a number of folks, and we’ve had a lot of requests for a spot in Examine where you can either take notes as you’re going through the case or, once you’ve finished your investigation and analysis, summarize the details and findings for inclusion in a report. I’ll circle back on the reporting in just one minute and show you how that ties in with the dashboard as well.
But coming down further, we’ve got the case processing details – so for any of the scans that you’ve run in Magnet AXIOM Process, you can see all the details as far as the description, the dates, and the examiner that did the scan, a [10:00] and keep that running history of actions taken against the evidence as well. Another quick thing that we’ve added in here: if you want to grab some information from the case information file, there are a number of good things we store in there. We’ll talk about our memory analysis integration in a few minutes. But if you’re looking to reference back on some other information that might be relevant for a report or something, you’ve got one-click access to open up the case information file.
Coming into the middle here, we’ve got a bit of an evidence overview in terms of the number of different types of evidence sources that you’ve added to your case. In this case, you can see I’ve got about five different evidence sources: a computer, a mobile phone, some cloud evidence, another separate one for a quick search of a folder, and a memory dump.
We give you the ability to … and again, another common request that we’ve heard was having the ability to give a quick description of maybe where that evidence was found, some of the physical characteristics of it or unique identifiers, but also the ability to add a picture of the evidence as well. And I talked about this just a moment before, but these are the types of things that we’re looking to include in the report, and I’ll show that off in a little bit more detail.
There are some additional pieces as we look, especially for mobile devices, some additional metadata that would be valuable. We’re going to continue to iterate on the information and the level of information we’re showing in any of these widgets. So, if at any time, there’s pieces of information that you’d love to see called out on this dashboard, by all means, get in contact with us, we’d love to get that feedback and love to continue iterating on this.
Coming then over to the far right-hand side, we’ve got our Places to Start, or the insights section of our dashboard. We start at the very top, with a summarization of the different artefact categories that we’ve found in a case – the ability to go quickly through here and filter down if I wanted to take a look at which artefact categories were found, or a summarization for a specific piece of evidence. I can quickly flip there and sort or filter that, just to see those specific evidence types.
Actionability was a real big focus for us in developing this. We wanted to make it so that if there was information we were showing in here, you could click on it and interact with it, and have it take you into a relevant, filtered view elsewhere in Examine to drive part of your investigation. So, for instance, if I click the Media category here, that takes me directly into the media artefact categories in the column view in AXIOM Examine.
Coming down a little bit further here as well, another nice little summarization widget here, around the tags and comments that have been added to content in a case. We break this into two sections – for tags that have been added by reviewers of the case as well as tags that have been added automatically by our Magnet AI categorization. And we’ll talk a little bit more about that in a few minutes. But you can see the difference in breakdown between the tags that different reviewers or investigators may have added or the automatic categorizations by the program as well. Similar to the above, you can click directly, and if I want to see which six artefacts were tagged with a bookmark, I can click that and go right into a filtered view and start looking directly at that bookmarked content.
As well, further down, we’ve got a number of other widgets. There’s a keyword matches widget – again, focusing on that actionability, if I wanted to see any of these keyword matches, I can click them and get taken into that filtered view in Examine. There’s also a widget for the most relevant mobile evidence we find – any potential passwords, tokens, or usernames in the case that could be used to add additional evidence to the case via Magnet AXIOM Cloud – as well as our profiles capability. So if I wanted to track communications or activity by a specific suspect or a specific victim, I could do that and really quickly jump into that created profile view.
Relevant as well – and I don’t have an example to show here in this case – is media categorization: folks that are using Project Vic hash sets or any of your own internal hash sets to do categorization of known media types will get a summarization, similar to the other widgets you see on screen here, showcasing any known illicit media that was found.
I mentioned the reporting, so I’m just going to flip over to that quickly. You can see here this is a quick HTML report that I’ve created just from this case, and we’ve added a few additional pieces of information to it. We’ve now added a Case Overview section, driven from the dashboard – you can see that Case Summary Notes section, where I’ve typed in some comments around the case at hand, and you can see that running list of Case Processing Details right in here as well, so that you’ve got that running report of the actions that were taken on the evidence in the case. There’s the evidence overview as well, again taken from that dashboard and including any images or descriptions that you may have given it during your analysis in Examine – you can see the pictures down the right-hand side.
Again, we’re always looking for feedback, and if there’s any additional features or functionality you’d love to see [called out] either on the dashboard or the reporting, we’d love to hear from you and continue to drive that forward.
Geoffrey: Thanks, Cody. The other area where we’ve been doing a lot of work in AXIOM is being able to ingest images from multiple sources. And we’ll talk a little bit about this in Mobile. But we [16:43] bring together computer, cloud, and memory evidence. And we have made some significant strides recently in terms of being able to access cloud accounts. We did launch a cloud acquisition and processing version of AXIOM in the fall – AXIOM Cloud – and that’s made steady improvements. We’ve also increased the amount of integration with AXIOM significantly. So, if you do pull out [a mobile token] within AXIOM, we can use that to seamlessly link to the cloud account it’s tied to, and pull down evidence for that account, assuming that you have permission to do so.
One of the focuses in the upcoming release – and I’ll talk about this in the next slide – is memory. We’ve always been able to process memory with AXIOM, but we’ve now integrated Volatility. I’ll cover that also in the next slide, but again, we’re talking about recovering more data within the product. And certainly, when we recover data, we want to make that recovery as easy as possible. We’ve had BitLocker support for quite some time – that’s the ability to bring an encrypted BitLocker image into the product and decrypt it – but we’re expanding that support and adding new types of encryption. McAfee is there, as well as a couple of new encryption types coming in the upcoming release, actually targeted for the end of May. And that’s being accomplished through Passware integration.
So again, Volatility and Passware are good examples of how, at Magnet, we’re not trying to do everything. We’re trying to partner with best-in-class in certain technologies, and integrate those into the product in a way that makes sense.
Now, memory analysis – this is something that, since we’ve launched [18:25] we’ve got great feedback on. We recognized, as I mentioned in the previous slide, that Volatility was the best-in-class memory analysis tool out there. It’s open-source, and it’s accessible through a CLI, which meant that it wasn’t always accessible to all users – you had to understand the CLI commands and you had to run it yourself. We thought that was a great opportunity to integrate Volatility and contribute to that open-source community – because we do want to contribute back – but also to seamlessly integrate it into AXIOM, so that you can load a memory image and process it with our artefacts like we’ve always been able to do. And we’ve heard positive comments about how AXIOM and IEF can do that. But now we have built-in Volatility plug-ins that appear just like artefacts.
We’ve got 21 to start with, and we’ll be looking at more. They’re just presented as artefacts, so you can load a memory image and turn these Volatility plugins on and off just like any other artefact. Then, when it runs, we run multiple instances of Volatility in parallel, so we get the results as quickly as possible. And finally – the best part, to me – the results gathered from these Volatility plugins are shown alongside all of the other results within AXIOM. So you get those results integrated with your other results, and you can bookmark them, tag them, and include them in reports.
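As a rough illustration of that "multiple instances in parallel" idea, here is how Volatility 2's command-line plugins could be fanned out across a thread pool from a Python script. The plugin list, image name, and profile below are placeholders, and this is not AXIOM's actual implementation – its integration is internal.

```python
from concurrent.futures import ThreadPoolExecutor
import subprocess

# Hypothetical plugin selection; AXIOM ships 21 such plugins.
PLUGINS = ["pslist", "modules", "netscan", "filescan", "timeliner"]

def build_command(image_path, profile, plugin):
    """Build a Volatility 2 CLI invocation for one plugin."""
    return ["vol.py", "-f", image_path, "--profile", profile, plugin]

def run_plugin(image_path, profile, plugin):
    """Run one plugin; return (plugin, output), surfacing errors as text."""
    proc = subprocess.run(build_command(image_path, profile, plugin),
                          capture_output=True, text=True)
    return plugin, proc.stdout if proc.returncode == 0 else proc.stderr

def run_all(image_path, profile, plugins=PLUGINS, workers=4):
    """Fan the plugins out across a small thread pool, then collect results."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(run_plugin, image_path, profile, p)
                   for p in plugins]
        return dict(f.result() for f in futures)
```

Because each plugin run is an independent process, a thread pool is enough to keep several cores busy without any shared state between runs.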
So, again, it’s a nice, seamless integration of Volatility, not only on the processing and loading side but also on the results side. We’re going to take you to that right now, and Cody’s going to walk you through how you can load a memory image and how those results look in AXIOM.
Cody: Alright, so we’re back. We’re in Magnet AXIOM Process this time, we’re on – I’ve elected to create a new case here. I’m going to skip forward into loading of our evidence sources, and you can see right at the top the familiar three platforms that we deal with, computer, mobile, and cloud.
Just to give you a quick overview on the cloud side as well, we touched on that … a number of different platforms that we support today, things like [going down] iCloud, backups, content from iCloud, content from Box, Dropbox, Facebook, a lot of good content from Google, Twitter, and things like that as well, that when you’re processing mobile evidence or you’ve got user credentials, you’ll be able to pull down that where possible, and pull that additional evidence into your case.
I’m going to come back out here and show you the process for loading a memory image into AXIOM. So, I’ll come into our computer platform here and select to load evidence. You can see the new Memory button we’ve added here on the far right-hand side, which takes me into this screen. Today we’ve just got the option to load a memory dump file. We’re looking to add some additional functionality here in coming releases, so stay tuned for more to show up in this screen shortly. But if I come in through here, I can pick the actual sample – I’ve got a sample001.bin here. And this is where we start to actually integrate and interact with the Volatility framework itself.
So, I have two options – I can let AXIOM, through its integration with Volatility, try to get a recommended list of image profiles, or, if I want, I can specify that information myself, either because I’ve run the Volatility command line previously and have that information, or because I’ve run the image previously in AXIOM. So, I’ll let this go ahead here and try to find it automatically. It came back real quick … We’re running the [KDBG] scan command here, trying to pull back some of the profiles that are recommended. You can see in the dropdown here, we’ve got two recommended profiles that came back from the results of that [KDBG] scan, but we also try to show you any of the more detailed information coming back from that command as well.
You can see, in this case, based on the profiles we’ve got returned, it looks like we will get some good results for either the processes or modules coming back, which gives us some additional confidence that we’ve selected the right profile here.
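For those curious what that profile recommendation step looks like outside AXIOM: Volatility 2's `kdbgscan` prints "Profile suggestion (KDBGHeader): …" lines, which a script can collect into a dropdown-style list. A small sketch (the sample format follows Volatility 2's output; this is illustrative, not AXIOM's code):

```python
import re

# Volatility 2's kdbgscan emits lines like:
#   Profile suggestion (KDBGHeader): Win7SP1x64
SUGGESTION = re.compile(r"Profile suggestion \(KDBGHeader\):\s*(\S+)")

def suggested_profiles(kdbgscan_output):
    """Return the distinct suggested profiles, preserving scan order."""
    seen, profiles = set(), []
    for match in SUGGESTION.finditer(kdbgscan_output):
        name = match.group(1)
        if name not in seen:
            seen.add(name)
            profiles.append(name)
    return profiles
```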
If I hit Next … I’ve now added the memory image to my case, and from there, I can take a lot of the same actions that I would on any number of other pieces [23:12] – so, in AXIOM today, if I wanted to add keywords to my search or anything like that, I can go ahead and specify those options.
And to come down here quickly to our Computer Artefacts screen, you can see that by default we’ve got all of the computer artefacts turned on. As Geoff mentioned earlier, AXIOM has been able to process memory for a while – we can pull a lot of good computer artefacts out of there: web-related content, some OS artefacts. But we’ve added this new section here specifically for memory artefacts. And you can see the 21 commands through the Volatility framework that we’ve got supported today. We try to show them with a bit of a friendly name as well as the actual command name in Volatility, for those that might be more experienced and more familiar with that command line tool already.
Once I’ve gone and done that, I can go ahead and process … but I’m going to go back quickly and touch on our Passware integration as well. I’ve got another computer image that I want to load here – in this case, a sample BitLocker-encrypted thumb drive. You can see, when I select that image to load in, the padlock there, indicating that some encryption was detected. When I come through to the next screen, we can see that it was BitLocker – that was the type that AXIOM, through its integration with Passware, was able to detect. We’ve got the ability to decrypt that using the password, if you’ve got that password from the suspect or victim. We also have the ability to start to do some basic cracking through a dictionary attack.
So, if you’ve got a word list that you’ve generated – and I’d say to everyone, the AXIOM word list generator that we’ve released recently is pretty cool, check it out; you can use it to create word lists from any of your AXIOM cases – you could load that in to attempt a dictionary attack and try to crack the password if you don’t have a better option there.
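Passware's cracking engine is proprietary, but the general shape of a word-list-driven dictionary attack is simple to sketch. Both functions and the bare SHA-256 stand-in below are illustrative only – BitLocker actually derives its key through a much slower, purpose-built chain, which is exactly why a good word list matters:

```python
import hashlib
import re

def build_wordlist(case_text, min_len=4):
    """Collect unique candidate words from case text, longest first –
    a crude stand-in for AXIOM's word list generator."""
    words = set(re.findall(r"[A-Za-z0-9]{%d,}" % min_len, case_text))
    return sorted(words, key=len, reverse=True)

def dictionary_attack(target_hash, wordlist):
    """Try each candidate against a known hash; return the match or None.
    Real tools derive keys with the volume's own (deliberately slow) KDF,
    not a single bare SHA-256 as in this toy."""
    for candidate in wordlist:
        if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
            return candidate
    return None
```

Words pulled from a suspect's own documents and chats tend to crack their passwords far faster than a generic list, which is the rationale behind generating the list from the case itself.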
I’m going to come in quickly, switch from Process back into Examine. I’ll start with … I’ll flip into Artefacts view here … I’ll start with some of the memory content. Just click in there, that takes me directly into the memory artefacts category here, and you can see these artefacts here that we’ve got, listed side by side with any of the other artefacts that we’ve found for that case. I’m going to filter down quickly just specifically to the memory dump that I’ve got in my case, apply that filter. You can see the type of information that we were able to pull back from that memory dump through AXIOM. Some web-related content, some browsing content, some OS artefacts, linked files, pre-fetched files, event logs, as well as all the Volatility command information that was pulled out. And you can see that listed side by side with those other artefacts.
We start to see running processes on the machine. If we’re looking to prove that a suspect had been using a specific program – say they’d been using something like Tor – you can use some of these memory artefacts to start to prove that program usage and show that in the case. A couple of good ones to call out: like I said, the process list; [timeliner] as well is a great one, showing an overall timeline of the events going on in memory, which can give you a little bit more insight; and file scan, to show any files that were open in memory, if you’re trying to show usage or attribution there.
Quickly jumping back out of the filter here … we mentioned some of those cloud artefacts as well, and I’ll filter back down … but if I come to the cloud artefact category, you can start to see … in this case, I’d actually added the cloud content from our cloud passwords and tokens fields here. So, we were able to pull some account information from the Nexus 5 we’ve got in the case, and with a quick right-click, I can use that token and username to add new evidence from the cloud as a cloud source in AXIOM. And I can come here quickly and see … in this case, it was a Gmail account. And I’ve got some Gmail messages, a whole bunch of Google activities – some really good information that could add additional value to my case.
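To give a sense of what "use that token as a cloud source" can mean mechanically: a recovered OAuth bearer token is simply attached to API requests for the account it belongs to. The sketch below targets Google's public Gmail REST endpoint; it illustrates the general technique, is not AXIOM's implementation, and of course applies only where you have legal authority to access the account:

```python
import json
import urllib.request

GMAIL_MESSAGES_URL = "https://gmail.googleapis.com/gmail/v1/users/me/messages"

def build_request(token):
    """Attach a recovered OAuth bearer token to a Gmail API request."""
    return urllib.request.Request(
        GMAIL_MESSAGES_URL,
        headers={"Authorization": f"Bearer {token}"})

def list_message_ids(token):
    """Fetch message IDs for the account the token belongs to
    (requires a live, still-valid token and network access)."""
    with urllib.request.urlopen(build_request(token)) as resp:
        payload = json.load(resp)
    return [m["id"] for m in payload.get("messages", [])]
```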
Geoffrey: Thanks, Cody. That was a great demo.
The next thing we wanted to talk about today is some of the improvements we’ve made recently in Mobile. And Mobile has been an interesting area, certainly for Magnet. We’ve always taken an artefact-first approach, even from our origins with IEF. Our initial focus was on computers, but that artefact-first approach [dovetailed] very well into Mobile. Mobile is an artefact-first area, especially because mobile devices are app-based, and all the techniques we use on the computer side are directly applicable to the artefact approach on Mobile.
So we’re continuing to increase our strength in mobile artefacts and also in acquisition. Certainly, the team is [29:03] … sorry, working to allow Magnet AXIOM to bypass passwords on different devices. We’ve got greater support for Samsung recovery images, which allow you to bypass passwords. We’ve recently introduced support for LG password bypass, as well as [29:20] password [bypassing], into AXIOM itself. So, again, there’s some nice support coming into the product in terms of mobile password bypass.
The other thing I wanted to say there is our philosophy at Magnet is to use whatever tool or method you can to get an image within the mobile space. Sometimes that means chip-off, sometimes that means using another tool, sometimes that means any method you can get your hands on that actually works. And our tool is constructed so that you can ingest images from other sources.
And we actually get asked this question quite regularly – can you ingest images from other mobile tools? The answer is simply yes. We have a number of blog posts on our website showing how you can ingest images taken by other mobile tools, because we really want to be agnostic in terms of how mobile imaging is done. There are blog posts on [Cellebrite, Oxygen, XRY] there, so you can understand how to take those images and ingest them into AXIOM.
Because we want to make sure that we recover as much information out of a mobile image as possible, recognizing that one of the biggest challenges in mobile is how you get that image. And there isn’t really one size that fits all for how you get that image.
A great example of that is the recent entry into the mobile acquisition space, Grayshift. And we’ve got a demo here talking about how we’ve taken a Grayshift image, and what’s really interesting about that image is they’re finding things that we haven’t seen in iOS images in quite some time. So, we’re getting some great information out of that. The images are being ingested quite seamlessly into AXIOM, and Cody will demonstrate that. So, it’s a great story in terms of: get the image however you can, and we’d love to see you bring it into AXIOM. And really, what we’re hearing from people in the field is that they’re finding more and different evidence when parsing that image in AXIOM.
And finally, the last advantage that I’ll say is you can present that image evidence alongside your other evidence [items], be that computer, cloud, memory. And [we] integrate it all into one case.
One final point that I’ll make here is that we’ve had [DAF], or Dynamic App Finder, in our product for quite some time. It’s been in the product for the last several years, and it’s really been advantageous in terms of finding apps that aren’t part of our standard artefact library. It looks for patterns within databases on images, whether computer or mobile, and then extracts those databases and adds them to the case. So, even though we may not have official support for an app or a database, you can still find information with it. In the past six months, we’ve made some improvements there. It was originally focused on chat applications, but now we’ve extended it to have more flexibility and to try to identify databases that contain geolocation data, URLs, and identifiers. So, you’ve got expanded support for a wider range of databases that could be of interest to you in the case. We’re going to walk through Mobile now, with a particular focus on Grayshift, and we’re also going to talk about some of the things that we’re seeing in the Grayshift image that we haven’t seen in some time.
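Conceptually, Dynamic App Finder-style discovery amounts to walking SQLite databases and flagging tables whose schemas look like chat, geolocation, URL, or identifier data. The column-name hints below are invented for illustration – Magnet's real heuristics are more sophisticated:

```python
import sqlite3

# Invented column-name hints for the categories mentioned above;
# Magnet's actual DAF heuristics are more sophisticated than this.
PATTERNS = {
    "chat": {"message", "body", "text"},
    "identifier": {"sender", "recipient", "account", "username"},
    "geolocation": {"latitude", "longitude", "lat", "lon"},
    "url": {"url", "uri", "link"},
}

def scan_database(path):
    """Flag each table whose column names match known artefact patterns."""
    con = sqlite3.connect(path)
    hits = {}
    tables = [row[0] for row in con.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    for table in tables:
        cols = {row[1].lower()
                for row in con.execute(f"PRAGMA table_info('{table}')")}
        matched = [cat for cat, hints in PATTERNS.items() if cols & hints]
        if matched:
            hits[table] = matched
    con.close()
    return hits
```

A matched table can then be extracted and presented as a custom artefact, which is the step AXIOM performs after its own pattern matching.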
Cody: Alright. So, I’m going to come back out here into AXIOM Process and come back to the beginning. I’ll just quickly remove that [source] that I’ve got there. This time I’m going to go in through the Mobile button. In this case, we’re going to show you how to load in a Grayshift image, so we’ll come into the iOS one there and elect to load evidence. Find my image. You’ll see really quickly here, we’ve got the three images at the top: the backup that Grayshift creates, the full file system dump, and memory. I’m going to load in the file system quickly here.
Basically, point AXIOM at that evidence image. You can see it come in here quickly. I’ll click Next.
And at this point, we’ve added that evidence to the case. You can go ahead, like we talked about earlier, and set up any of your keywords. If you want to ignore non-relevant files using any of the [33:50] [NSRL] hash lists, or tag files that might have a matching hash value, or load in any of your Project Vic files or other hash sets, you can specify that as well. Geoff mentioned Dynamic App Finder – we’ve got a screen here to allow it to run and set that up. And we’ll come to our mobile artefacts…
By default, we’ve got all of the artefacts we support basically checked off here. At this point, with my evidence source added, I can go ahead and click to start analyzing that evidence. It’s a pretty simple, seamless process – you don’t necessarily have to know anything in terms of how to decode the information properly or anything like that. AXIOM just handles that for you.
We’ll come back into AXIOM Examine here. In this case, we’ve pre-processed one of those sets of Grayshift images. So, I’ll come in here to our artefacts view. And you can see really quickly I’ve got a bunch of custom artefacts down here at the moment – we’ll talk about those in a second. But Geoff mentioned some of the artefacts that we were seeing that we hadn’t seen in some time. Things like iOS Email: all of a sudden, with the processing of those Grayshift images, you can get full access to those iOS emails and things like that.
I come down here … Geoff mentioned Dynamic App Finder earlier, and all of these custom artefacts here were actually created using that tool. I’ve got a good one to show you here, around the current power logs. In this case, you can see the specific application on the device and some of the screen-on time it had. If you’re trying to prove usage of a specific application – an encrypted chat app or something like that – and the suspect’s saying that they’ve never used it in their life, being able to pull some of that information back lets you show that there is definite usage there.
The nice part about the DAF custom artefacts is that once they are created, we save those custom artefact definitions for you directly into AXIOM Process. Once you’ve created them, you can run them against any of your future cases, and if we find that same information in a SQLite database, we’ll pull it out for you and display it along with the rest of the artefacts here. That makes it really powerful, because it just continues to add more and more application support and more and more evidence to your case as you use it.
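The idea of matching a saved artefact definition against future SQLite databases can be sketched roughly like this. This is a hypothetical illustration, not AXIOM’s actual schema or code – the definition, table, and column names are invented for the example.

```python
import sqlite3

# Hypothetical custom-artefact definition, in the spirit of Dynamic App Finder.
# The table and column names here are illustrative only.
DEFINITION = {
    "name": "Example Chat App",
    "table": "messages",
    "columns": ["sender", "body", "timestamp"],
}

def matches(db_path, definition):
    """Return matching rows if the database contains the table and columns
    the saved definition names; otherwise return an empty list."""
    con = sqlite3.connect(db_path)
    try:
        # Does the table named in the definition exist in this database?
        cur = con.execute(
            "SELECT name FROM sqlite_master WHERE type='table' AND name=?",
            (definition["table"],),
        )
        if cur.fetchone() is None:
            return []
        # Does it have all the columns the definition expects?
        cols = {row[1] for row in con.execute(f"PRAGMA table_info({definition['table']})")}
        if not set(definition["columns"]) <= cols:
            return []
        wanted = ", ".join(definition["columns"])
        return con.execute(f"SELECT {wanted} FROM {definition['table']}").fetchall()
    finally:
        con.close()
```

Run against every SQLite database found in an image, any database whose schema matches the saved definition yields rows that can be displayed as artefacts.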
One other quick thing we’re showing here – I’ll flip into our File System view to look at a couple of things. You can see here quickly, I’ve got my Manifest.plist file … something a lot of folks haven’t known, but when we evolved from IEF into AXIOM, we really wanted to add that deeper view into the raw evidence itself in the file system. We’ve got a number of explorers and previewers here to help you view your evidence. In this case, we’ve got a plist viewer that can help you rip through any of those plists that you might find in an iOS image – and the same with SQLite databases and any of the other content types that you’d want to preview in your case. So even if it was an unsupported application, you’d still be able to find that evidence and pull it out for your case.
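That kind of raw plist inspection can be sketched with Python’s standard `plistlib`. This is a minimal illustration of the technique, not AXIOM’s previewer; the keys written in the usage below are examples, not a guaranteed Manifest.plist schema.

```python
import plistlib

# A minimal sketch of inspecting a plist's top-level structure -- the kind of
# raw view a plist previewer gives you for an unsupported application.
def plist_keys(path):
    """Return the sorted top-level keys of a binary or XML plist file."""
    with open(path, "rb") as f:
        data = plistlib.load(f)  # handles both binary and XML plists
    return sorted(data.keys())
```

From the top-level keys you can then drill into whichever nested dictionaries look relevant to the case.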
Geoffrey: This is an exciting area here, Magnet.AI. And there’s been a lot of talk about artificial intelligence in many industries. And we’ve done our own investigation here, and we’ve got some great results. And certainly we’ve been very pleasantly surprised and happy with how effective AI has been in certain use cases. A year ago, we launched the first version of Magnet.AI, which was just focused on identifying chat threads that would be representative of an activity called luring or grooming.
With 2.0, we’ve introduced a much more expanded and comprehensive version of Magnet.AI. We’ve expanded the number of chat threads that we’re categorizing – you can see some of them listed on the screen here as well. And we’ve also added support for object recognition within pictures. You can see some of the areas that we’re detecting now: CSEM, drugs, weapons, and pornographic images – to go along with the luring and sexual conversations that have been added as well.
So, we’ve expanded the capabilities of Magnet.AI within the product, and we’re looking to expand even further in this area by adding more models for different items. And we’ve already got some items that are targeted in our roadmap. So, if you do have ideas here, please let us know.
The other thing that we’re always looking for is partners that have data that they would like to help us train on. A big part of AI is making sure that you’re training on meaningful data that is representative of the picture, object, or type of chat activity that you’re trying to identify. So, again, we’re looking to expand this program. The text models are focused on English, but we are looking for partners with other languages. The picture models are great because they aren’t impacted by local languages. So, again, if there’s any area that you’re interested in us expanding into, please let us know.
And also, if you’re interested in understanding more about what partnering with Magnet in this area is, let us know and we can follow up.
Without further ado, we’ll do another demo here.
Cody: So, I’ll flip back to my case, I’ll come back to my dashboard here really quickly. You can see in this case that I’ve actually gone ahead and pre-processed some content here. But the thing that I wanted to call out is we’ve got these categorized chats and categorized pictures buttons right on the main dashboard. It’s all for easy access. If you’re anywhere else in the case, you can come up to this Process menu, and you’ve got the Categorize chats and Categorize pictures buttons here as well, to launch you into the workflow of actually going ahead and categorizing some of that content.
I’ll flip quickly here to a specific piece of evidence, and I can come in here … one of the first things we’ve done: if I went in here and started categorizing my pictures, you can see that we’ve added the option for a little bit more granularity on specifically what gets categorized. By default, we’ve got all pictures selected here. But if I’d applied some filters to the case – maybe a time and date range, let’s say, or a specific piece of evidence that I was most interested in – I could select that and have it categorize just those pictures. If I click Next, I start to see the various AI classifiers that we’ve got added in our case. We’ve got weapon content, CSEM content, nudity, drugs – as Geoff called out…
So, if I was going to go ahead and I wanted to look for any weapon content, I could go ahead and click that box, have that turned on. If I want to change the tag, I can go ahead and update that to something that I want to use. And then I basically go ahead and click Categorize pictures, and we kick off the process.
Same on the chat categorization front. A big piece of feedback that we had from our folks was around not doing it automatically – just making sure that we weren’t stepping outside the bounds of any warrants or the scope of the investigation. So we’ve made this a processing task now. Similarly, if you wanted to select a handful of conversations from a specific evidence type, or all of them, you can go through here and select which AI classifier you want to run.
Similarly, you can customize the tags that are applied. You’d go ahead and click Categorize chats, and once that kicks off and starts running, we’ll see the status bar update in the lower left corner here to show the progress of that post-processing task. And as results are found, you can click to show those results, get filtered down to them, start reviewing them, and refresh as the processing continues along.
If I come back out here – and I already ran this on some drug content that was in a case – I can click quickly here to zoom in on my artefacts. So now, you can see that I’m filtering on this possible drugs content. I’m in the Pictures artefact category … if I flip over into my thumbnail view, you can start to see that a number of these are definitely indicative of drug content.
The one thing to note with AI is that it’s not a perfect science. We’ve opted, fairly intentionally, with the training of our classifiers, to optimize to make sure that we don’t miss any potential evidence. This means that you will see some false positives showing up in your case – content that might not be indicative of the category. We’re obviously going to continue to work to further optimize the result set we get back, with more content and more iterations on the training. But we do really want to optimize to make sure that we aren’t potentially missing any evidence in the case.
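The recall-versus-precision trade-off described here can be shown with a toy example. This is not Magnet’s implementation – just an illustration, with made-up scores, of why tuning a classifier not to miss evidence necessarily produces some false positives.

```python
# Toy illustration of the recall-vs-precision trade-off: a lower decision
# threshold flags more of the real evidence, at the cost of also flagging
# benign items (false positives) that a reviewer then has to dismiss.
def flag(scores, threshold):
    """Return the indices of items whose classifier score meets the threshold."""
    return [i for i, s in enumerate(scores) if s >= threshold]

scores = [0.95, 0.62, 0.40, 0.10]   # hypothetical per-picture scores
strict = flag(scores, 0.90)         # misses the borderline 0.62 item
lenient = flag(scores, 0.50)        # catches it, plus more items to review
```

A tool that must not miss evidence chooses the lenient setting, which is exactly why reviewers see some non-relevant content in the categorized results.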
And as Geoff said, if there’s anyone interested in getting a little bit more of an understanding of how we can work together to train new AI classifiers that might be of value to your investigations and the work you’re doing, please don’t hesitate to reach out. There’s a number of different options and opportunities there that we can walk you through. And we’d love to be able to hear the feedback on how [you think] machine learning and AI can help make your cases go faster.
Geoffrey: Okay, so the last area that we’ll talk about here is Connections. Connections was released in the fall of this past year, but we’ve continued to improve upon it, and as we’ve added new functionality into the product, we make sure that it’s integrated with Connections itself. The big thing around Connections – and this is where we differ from maybe other link-analysis-type techniques – is we wanted to show linkages between all the evidence in the case, not just the people. We link artefacts together so that we can tell the story about the evidence itself.
We’ve often heard that sometimes you can find that smoking gun – that picture, that text that is at the center of the suspect’s activity – but building what happened around that text, that picture, that document is what’s important. How did that picture get to the device? How many times was it opened? How was it shared? Those are key questions that often come up when you’re dealing with CP cases.
So, we wanted to create a way to visualize the evidence so that you could see how it connects together. And that’s the point of Connections. We’ve got an example here that we’re going to walk through, on how you enter Connections in a use case and how you expand out from a piece of evidence through its connections.
Cody: Okay, so we’re flipping back into AXIOM Examine. In this case, a quick scenario we’ve got set up is we’re in a jurisdiction where owls are illegal to trade and buy, and two users have been discussing the illegal trading of owls. We’ve got their computer, we’ve got their mobile device, and we’ve actually got some cloud evidence as well. And we’re looking for some proof of exchange and communication back and forth around those topics.
So, if I come in here quickly to my artefacts view, I’ve filtered down to a set of essentially relevant documents ahead of time. I’m just going to flip out quickly here, back to my column view. You can see we’ve got a small handful of items pre-selected here that look like they might be relevant. One thing that we’ve done: building connections is a post-processing task, so if you want to leverage the Connections feature, you can come up to the top left here, to the Tools menu, and click Build connections. It will kick the process off. We’ve got a fairly detailed progress indication that walks you through how far along the steps are. And once that’s complete, you can see in the details card here this little triangular icon show up, which we can use to enter the Connections view.
In this case, I’ve got this mynewpet.jpeg picture that I’m interested in seeing where that might have come from, or where it might have gone. So, I’ll click that to start to view any of the connections around that particular filename in the case. So, we’ve got a lot of information that comes back here, which is pretty exciting. And if I just rearrange a couple of things really quickly, just to make it a little easier to consume…
We can see that we’ve actually got this file showing up on multiple evidence sources right away, which is interesting. So, I can see quickly that obviously this file is found on Sarah’s desktop computer. But I can also see quickly that we pulled this image out of some cloud content as well, which is kind of interesting.
I can see that there’s a reference to this particular filename on an F: drive. So, I might want to go back and look for removable media, to make sure I haven’t missed anything there. I can also see really quickly that this file was actually transferred by a particular user. So, if I click this node or attribute here that we zoom in on, we can see the matching results – any instance or artefact that had this filename being shared by this particular identifier – and then we can see this cloud Gmail message show up here.
So, if I look down and I start to look at my content here, I can see an email message, “Hey guys, check out my awesome new pet,” and I can come down quickly and I can see … I’ve got that attachment name, I can see that this might have been shared with these other individuals in the case. Might want to have a further look there.
As well, we try to tie that in and talk a little bit about attribution.
Geoffrey: Sorry, we’re just checking to see if it’s frozen here. We do seem to be frozen on the screen. Okay.
Cody: We’ve tried to tie that in as well – we’ve talked about attribution – with any of the number of good OS artefacts that we recover in the case: things like your shellbags and your LNK files, and you can see some of the MRUs for recent files and folders showing up here as well. We started in this particular example with a filename. But you can also do some quick validation through the graph with these file hashes: if I hover over the file hashes, I can see that there are no other links off of them, which would otherwise indicate that the same file existed elsewhere on an evidence source under a different filename. So we can validate quickly that there aren’t multiple instances of that evidence across multiple different sources.
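The hash validation Cody describes – checking whether the same content appears under different filenames – boils down to grouping files by digest. A quick sketch of that idea (an illustration of the general technique, not AXIOM’s code):

```python
import hashlib

# Hash each file and group paths by digest: the same content stored under
# different filenames lands in the same group and stands out immediately.
def group_by_hash(paths):
    """Map SHA-256 digest -> list of file paths with identical content."""
    groups = {}
    for p in paths:
        h = hashlib.sha256()
        with open(p, "rb") as f:
            # Read in chunks so large evidence files don't exhaust memory.
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        groups.setdefault(h.hexdigest(), []).append(p)
    return groups
```

Any digest that maps to more than one path is the same file living in multiple places, regardless of what it was renamed to.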
Geoffrey: Okay, I think we’ve got a screen frozen issue here, but we’ll continue on. We’re hoping that it’ll unfreeze while we try and work through the technical difficulties, but…
We’re close to end of time, so we’ll follow through the rest of the slides, and hopefully you’ll get the screen back.
Just a few more things: We do have a Magnet user summit coming up in Las Vegas on May 21st. We have a lab, a lecture, and an overview of AXIOM. If you want more details on that, please check out our webpage or go to magnetusersummit.com for further details on that.
At this point, we’re coming up with five minutes left. I know there’s some questions on the line. We can talk to some of those questions. We’re just going to pause here while we get a list of those questions, and we’ll start answering a few of those.
One of the questions is, generally, what’s the best configuration in terms of hardware for performance?
I’m going to separate this into two answers. In terms of processing, to optimize the processing you want a multicore environment, and what I would recommend is up to 16 cores. You can add more than 16 cores, but the efficiency that you get from those additional cores isn’t always worth the cost. You do want a decent I/O connection, so SSDs help with processing. But probably the most important thing that helps with processing is clock speed – CPU speed.
So, if I was going to sum it up: you want 16 cores and a high CPU clock speed. SSDs are good, but those top two items are probably the most effective ways to make your processing better. And as a rule of thumb for the processing – I’d forgotten about RAM – it’s usually about two gigabytes per core that we recommend. From an examination point of view, you definitely want good I/O; this is where SSDs really come into play. And obviously, good CPU helps, and RAM helps as well. On the examination side, high RAM is good and I/O is very important.
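The RAM rule of thumb above, as simple arithmetic:

```python
# Rule of thumb from the webinar: roughly two gigabytes of RAM per
# processing core.
def recommended_ram_gb(cores, gb_per_core=2):
    return cores * gb_per_core

# e.g. the suggested 16-core processing box -> 32 GB of RAM
```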
And one of the areas that we’re looking to optimize is that I/O connectivity and how efficiently you’re using it, and that’s where network connectivity comes into play.
So, we’re going to go to the next question now.
Geoffrey: The next question is [what is the] best way to add evidence to a case?
There’s a couple of different ways, and one is to add the evidence all upfront. You can add multiple computers, multiple mobile devices, multiple memory images, and multiple cloud sources to a case upfront. Once the initial processing is done, there’s a couple of different ways to add evidence.
The one I’ll recommend is the Process menu within Examine, which can be accessed from any explorer. There’s an option to add new evidence to the case. If you click on that, you can add evidence to the case itself, and that will bring you back into Process, where you can add a secondary scan. That evidence will then be incorporated into your case.
From the cloud perspective, if we surface passwords and tokens for a cloud service, you can also go into the Artefacts view, right-click on those, and add evidence to the case. That will jump you back, with that cloud token, to add evidence to the case.
So, those are two examples of adding new evidence to the case once the initial processing is done.
Cody: We have a similar question around how the top artefacts category will look when multiple evidence sources are added. As I showed in the demo, the widgets in that top artefacts category show, by default, the macro view of all evidence sources and totals in the case. But if you want to view just the information found on a specific device, you can filter that down with the dropdown on that artefact category’s widget.
We’ve got another one around whether BitLocker recovery key extraction is supported in the memory analysis as well as password hashes for memory. Those are two that we’re working on pretty actively and are hoping to add to the product release soon. Definitely a huge amount of value in having those, so thanks for the question and the feedback there. They’re definitely ones that we’re looking to add as quickly as possible.
Geoffrey: We’re at the top of the hour now, so we’re running out of time for additional questions. We hope you’ve found this beneficial. If you put a question in, we’ll try and follow up with an answer to it. And again, if a question comes to you afterwards, please follow up. If you’re looking for more information on Connections, reach out as well – we can schedule demos of that.
Apologies for the technical difficulties at the end, where we lost the demo halfway through the connections portion of the webinar.
So, again, thank you for your time today. Thanks for your questions and your attention. Hope you have a good day.
End of Transcript