Presenters: Drew Fahey, Senior Development Manager, BlackBag Technologies and Justin Matsuhara, Forensic Analyst and Instructor, BlackBag Technologies
Join the forum discussion here.
View the webinar on YouTube here.
Read a full transcript of the webinar here.

Justin Matsuhara: Well, good morning, everybody. This is the start of our webinar. I am Justin Matsuhara, a forensic analyst and instructor with BlackBag Technologies. Joining me today is Drew Fahey, our Vice President of product development. This morning we’re going to discuss BlackLight and the recent changes. Drew will be talking about and demonstrating some of the new, cool features we offer. BlackLight is our computer and mobile device forensic software. It is obviously capable of analyzing not only Windows and Mac but also Android and Apple iOS devices. The nice thing about this tool is you have the ability to bring in all those devices under one file, allowing you to compare data across the devices.
It works on either Windows or Mac platforms, and supports E01, dd raw, SMART, and VMDK forensic images. As you will see, the tool’s user interface is extremely easy to navigate. Drew’s team of developers does a great job of bringing the data to the forefront, so you as an investigator don’t have to dig to find that critical piece of evidence. We’ve all been there, where you’re just constantly digging around and you just can’t find it. But Drew’s team did a great job on that interface, so that it’s right there for you to see.
While Drew is demonstrating the newest features, you will have an opportunity to ask questions. But to do so, type in your question in the question box, which is about two-thirds down the Go To Webinar screen. At the very end of the demonstration, Drew and I will address the questions you have.
So without further ado, welcome, Drew Fahey.
Drew Fahey: Thanks, Justin. Good morning, everybody. Welcome, fellow forensicators. Most of you may know me; I’ve been around the forensics world for some time. I’m very happy to present what we’ve put together for this year for BlackLight 2016 R2. I’m going to be running through a whole bunch of different items that we’ve created as new features in BlackLight. I’m also going to talk about some of the behind-the-scenes things that we’ve done that obviously aren’t as sexy as the new user interfaces and things like that, and I’ll spend some time talking about that.
But I really would like to spend a lot of time at the end also, like Justin just said, answering any questions that you guys may have. I may even give you guys a preview into some of the up-and-coming features that we’re going to be addressing in future releases. So with that said, I’m going to go ahead and kick this off, and give you a demonstration of BlackLight 2016 R2.
Alright, so what you should see here is BlackLight’s interface – this would look very similar to those who’ve been running 2016 R1 for some time. There really has been no difference in this particular UI window. Sorry about that. And what I do want to show those… one of the problems that we’ve had, and it’s completely understandable, is in previous versions of BlackLight, we had the Add Evidence button within the user interface there as well as the File menu to add evidence. One of the problems with that though, that we’ve noticed, was that we had kind of an overabundance of menu options, and it was confusing to many users. And rightfully so, in the sense that if you had, say, an encrypted iOS image that came from a product, say, Cellebrite, or if you had just an iOS backup, or if you had a disk image… we throw around a lot of terms like “image” and whatnot, and that can be very confusing to people, especially new forensic people, who come in and they’ve only been doing the job for three months or so.
So one of the things that we really wanted to do was hopefully try to simplify that interface of bringing in and ingesting evidence. So one of the things that we’ve done is created a whole new ingestion interface for that purpose. Now, as before, there are still multiple ways you can bring evidence into BlackLight, regardless if you’re running it on Windows or running it on Mac, like I am right now. You can simply drag and drop items into your [evidence] window. You can click on the Add button. Both of them will bring up the same window.
So for example, if I wanted to bring in an image or if I wanted to bring in a raw PST file, which we’ll talk about later on, when we’re talking about email, or we wanted to bring in a memory dump or a single file or a Cellebrite image of the iOS or an Android device, it really doesn’t matter. You can take that image, you just drag and drop it on to the window, and it will open up the new evidence ingestion window. The other way to bring this in is to click on the Add button, and it will do the same thing.
What we’ll do is we will remember any of the evidence items that you’ve already added, and one of the biggest differences we have in this user interface here is the fact that before, on previous versions of BlackLight, it was kind of an all-or-nothing approach. So you could only add one piece of evidence, choose your options, and then go. And if you wanted to bring in another piece of evidence, you would have to basically go through that whole process all over again. This new ingestion window allows you to bring in multiple pieces of evidence all at the same time, all with different options.
So as an example, I currently have three different disk drives attached. So this is the drive that BlackLight is running on, these are two different attached external devices. I can click on any of these and I can actually bring these in, if I wanted to bring in that drive itself. This is the image that I drag and drop on to the BlackLight window, and it automatically detected the partitions that make up that particular image. And you can see, there are two automatically selected partitions. These are partitions that we [obviously] recognize and say that there’s going to be potentially more data on there, so they’re automatically checked. And then, over on the right are the ingestion options.
Now, you can globally set those, so if I have my cfreds, which just happens to be a typical forensic test image [sweep], I uploaded that into our evidence window here, and I can choose at a global level the options that I want. So by default, our triage option is set, which basically means it’s going to do the file signature analysis as well as calculate MD5 hashes, as you can see when I open up the window, and it will calculate that based on everything that’s checked.
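At its core, a triage pass like the one described above comes down to reading each checked file once and computing a hash over its contents. As a rough illustration only – this is a hedged, stdlib-based sketch, not BlackBag’s actual implementation, and the function name is mine – chunked MD5 hashing of an evidence file looks like this:

```python
import hashlib

def md5_of_file(path, chunk_size=1 << 20):
    """Hash a file in 1 MB chunks so large evidence files
    are never loaded into memory all at once."""
    digest = hashlib.md5()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

Reading in fixed-size chunks matters for forensic images, where individual files can be many gigabytes; the same loop works unchanged for SHA1 by swapping in `hashlib.sha1()`.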
Well, if I wanted to change those options for the individual partitions, I could do that. So for example, if I wanted to set NTFS here and I wanted to add SHA1 hashes, I could do that. If I actually wanted to do picture analysis on this particular image but not this one, I could do that as well. If I chose not to even bring in this partition for whatever reason, I could do that. And it will remember all of the states. So if I also wanted to bring in another piece of evidence – so let’s say, for example, I want to bring in this raw file, this happens to be PST emails, I can drop that in, and now I can do, again, my own ingestion options, just on that one, single file.
One of the other differences that you’ll notice very quickly, for those of you who’ve been using BlackLight for a while, is before, if we wanted to bring in a single file, we forced you to put that file into a folder and then to bring the folder in. We’ve made a change within the framework that now allows you… you can bring in a single file, and it will be treated just like a file. You no longer have to put it into its own folder.
So I’ll just let this run. As you can see, I’m bringing in a PST file. I actually want to do some advanced options on there. I actually want to parse it as email. Email is one of our… it’s a non-default option, and that’s because a lot of people don’t necessarily want email for particular investigations. So it’s an option that you’d have to choose under the advanced system files, so you can actually see that mail parsing is one of those options. So for that particular one, I do want to choose email, so it’ll actually parse the email and provide it within the user interface when we’re ready to actually look at that.
One of the other options that you’ll notice under the Advanced Options window when I open it is calculating entropy. That’s something new: a lot of our investigators are obviously looking for encrypted files, and [unclear] quick glance at that, they can choose the entropy option, and it will calculate that on every file that’s not zero bytes in size.
All the other options that you notice there, most of those have not changed. We have kind of re-factored them and improved upon them. That’s specifically true of the file carving: we’re constantly improving our file carving options for files. Hashing hasn’t changed, as well as the picture and video analysis. So you can choose the options that you want. Again, we still have the templates that we had in the past, and so you can actually see each one is slightly different. So we’ll just go ahead, and I’ll just start that, and it will bring them in. So you can see, as before, we have our evidence status window that tells us what’s going on, which particular item it’s currently working on, what’s parsing and whatnot. You can see that here.
So that’s really the new ingestion, and again, as a recap, the biggest thing that we have with that is behind the scenes it should automatically detect items. So for example, if you do bring in an encrypted iOS image, it will identify, hey, this is an encrypted iOS image, and as such, you’ll see a lock icon. For example, what I did with this Josh Bennett phone, and I can actually show you an example of that when I go to the Racer partition and I go to the actual intelligence view and I want to see all of, say, the device backups. So these happen to be all of the backups that are within this image file. And I can import any one of these that I want to. So for example, with this iPhone 5, this is the one that I brought in, and this one was encrypted. So when I try to bring that back in to BlackLight, it has to export it and re-import it, and when it does that, it’s going to tell you that that particular image is encrypted, and it’ll give you the option to decrypt it.
Now, BlackLight does not do the forced decryption on anything. You will still have to know the appropriate credentials in order to unlock that particular device. So as you can see here, there is a lock icon. And if you hover over it, it’ll tell you, hey, it’s encrypted. And this is going to be true for all different types of images that we automatically detect. You can click on that lock icon, and it will pull up a new window that will show you the ability to enter your password, assuming you have it.
Again, just to stress, BlackLight does not currently brute force any of these encrypted passwords. So you will have to know it in order to ingest it. And that said, one of the things about the new ingestion window is the fact that it has some smarts behind it. Hopefully we’re not too smart and don’t make it more difficult, but it will automatically do that stuff for you. So you don’t have to go in there and choose the appropriate type any more; it does that for you.
So that’s really the new ingestion window. And again, as a recap, each partition can have its own processing options. There’s a lot of advanced options that you can choose for each one. I will stress that we’ve tested this against some large partitions, especially… not partitions, large images. Especially images like Windows images, that have multiple volume shadow copies. And I will just say that if you load up your ingestion window with five or six images and you’re trying to process each image, and each image contains five or six volume shadow copies, you’re going to really tax your system. And so it could be an interesting case of [a lot of wait] and see what happens.
That said, what I will tell you is: we were a little delayed in getting this release out when we wanted to, but the reason for that is because we started looking at some of the processing issues that were going on behind the scenes in ingesting the evidence, and we actually found areas that we were able to dramatically improve. To give you an idea of dramatic improvement, in some of our test case scenarios, where it was literally taking four days to process, in some instances, with all processing options chosen, we have actually got that down to about four hours. Now, that is extremely significant in fixing a lot of those issues, which is great. What actually showed that problem was this new ingestion window and loading up BlackLight with a whole bunch of different processes and options that could be chosen, and so that was one of the things that kind of magnified a problem that we had. And we went back and made sure we delayed the release to actually fix that. And so we’re pretty happy with the performance that we’re running on now. And later on, if we get time, I’ll talk about some of the future updates that we’re planning for even more performance enhancements.
So on with the ingestion window – you get the benefit of having an extreme performance boost, if you will. So that’s all we have with the ingestion window. One of the other things that we like to pride ourselves on here at BlackBag is that we really try to listen to our end users as far as what makes their life and their day-to-day job as easy and pain-free as possible. And one of the things that has been an ongoing request for some time – it’s not that we’ve ignored you, we’ve actually tried to put some things in place, and it was a little helpful – but we kind of went back to the drawing board to look at it, because more and more people were asking for it. And it makes sense, especially with people who have multiple monitors, with the ability to actually do breakout windows within the user interface. So we incorporated that, although we did do it a little bit differently. We don’t have every single window have the ability to break out. But as we started talking to users, what we really found out was that most people just really want to break out one or two extra windows so they can actually put one window on one monitor for scanning through images, and they have another window on another monitor where they can see other data.
So what we came down to, as I’ll show you on this racer partition: if I do a file filter and I want to list… we can just do user-created images as an example on this partition. So one of the things that we did is we took our file content view, which is this lower bar right here, and we made that have the ability to actually break it out into its own pane. So this would be a typical example of what users may do in a real case. We’ve already done this behind the scenes for you, where we’ve calculated all of the system hashes against this partition and now are saying, “show me all of the images that a user had the potential to make.” So that means these are not ones that come native with the operating system. It could be that they’re images that come from third-party apps, unless those are in your hash sets that you’re ignoring. But in this case, this is just one of our filters.
So I’m actually going to close what you guys are used to seeing as the file content view, because I want to maximize my list. So this may be the one window I have on one monitor, and now what I can do is I can break out those panes. So if you look at the hash marks right here that are on the file content window, if I just drag those out, it’s going to open up its own file content view for whatever I selected. So no images, or no item within that image, is selected yet. So it is going to be blank. Now, we don’t limit you, so you can create as many of these as you really want to. For these purposes I’ll just do two or three, and you’ll be able to actually see these. I’ll just resize this one. So now, once I start clicking on a file, you’ll notice that the file content views will start updating. So I’ll just pick one of these images that are named by a date stamp, which typically means that they were… in this case I just happened to know this, images that were taken with the phone. So when I start clicking on it, the very first tab that opens is our hex view. So you can actually see there’s some EXIF data here. So this is in fact a jpeg file. I can go to metadata on one view, so I can see all the file’s metadata. Over here we can do a preview. So if I start clicking down…
Maybe I’ll open another window here and I’ll say… what if there’s location data on some of these? So you’ll notice that they’ll all stay in front. So for each time I choose a file, you’re going to see that it updates within those breakout panes. And again, we don’t limit you, so you can break out as many of these as you want. Obviously, it only makes sense here, for this, that we have six panes, one for each of the tabs that we have in here. Six would be the maximum that I would probably ever do. So that’s what we’ve done.
Now, the great thing about the way I feel that we’ve done this is in every software we’ve used – and not just forensic software, but even things like, say, Photoshop – one of the problems with the breakout windows users generally have is resetting them. So you either have to go to a File menu and say, “Hey, reset the view the way I want it,” or you have to sit there and you have to drag and drop that window into the precise area. And if you don’t get it in the right area, it just doesn’t work. So rather than doing that, we opted to say, “You know what, we’re not even going to do that. If you don’t want these windows any more, you just simply close them.” They go away, and it’s there. And as always, the file content view still exists. So you can just drag this window up, and you will always have that one in place. And you can still break out another one if you want.
So that’s what we did, we called the file content view a breakout window, and it worked really, really well. And again, this will work identically on Windows as it does if you’re running a Mac, like I am today.
So one of the other things that people have been asking for for some time, and it was high crowdsource value basically, meaning that a lot of our customers were asking for it, and we’ve implemented it within this release, is the ability to do multi-column sorting. And there’s obviously several cases where this can become very handy. So as an example, just in this view that I have right here, showing all of what we’re calling user-created images, there could be multiple image types here. As you can see, I already have… there’s some jpegs, there’s png, if I scroll down through this list, you’re probably going to find TIF files. So if I wanted to sort these by the extension type, our content extension – and basically, as a reiteration, content extension means we’ve analysed the file type based on the header information of that file – so as an example, if I go to the hex view, based on the header information, we know that this is in fact a jpeg file.
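Content-extension analysis like this is usually implemented as a lookup against known file signatures ("magic bytes") at the start of the file. The sketch below is illustrative only – real tools ship far larger signature databases than this toy table, and the names here are mine, not BlackLight’s:

```python
# Illustrative magic-byte table; a real signature database is much larger.
SIGNATURES = {
    b"\xff\xd8\xff": "jpg",          # JPEG (SOI marker + next segment)
    b"\x89PNG\r\n\x1a\n": "png",     # PNG 8-byte signature
    b"GIF87a": "gif",
    b"GIF89a": "gif",
    b"BM": "bmp",                    # very short magic; prone to false hits
}

def content_extension(header: bytes) -> str:
    """Return the extension implied by a file's leading bytes, or 'unknown'."""
    for magic, ext in SIGNATURES.items():
        if header.startswith(magic):
            return ext
    return "unknown"
```

This is why a file renamed `report.doc` still shows up as a jpeg: the header bytes, not the filename, drive the classification.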
So if I were to take this particular list and just click on the column header where it filters, and I’m sorting in ascending order all of my content extensions. So you can see, bmp is here, gifs, jpegs are going to be below that, pngs and so forth. Up until this version, we’ve never had the ability to sort within that sort. Now we can. So I can actually now sort… so if I sort based on content extension, and now I can sort based on when the date was created, or a size. So if I hold the SHIFT key down and I say click on size, you’ll now see that we have a double hash there. And so now you can see right away, if I just re-sort this one, based on the bmps, I can now have it with the largest first in a descending order, and I can do that. And you can do that within any of the views within BlackLight. So I know that was something that’s been asked for for some time. We’re very happy to finally bring that to you.
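Under the hood, a SHIFT-click secondary sort like that is just a composite sort key: primary column first, secondary column second. A minimal sketch (field names are my own, for illustration):

```python
def multi_column_sort(rows):
    """Primary sort: content extension, ascending.
    Secondary sort: file size, descending (the SHIFT-clicked column)."""
    return sorted(rows, key=lambda r: (r["ext"], -r["size"]))

files = [
    {"ext": "jpg", "size": 10},
    {"ext": "bmp", "size": 5},
    {"ext": "bmp", "size": 9},
]
# bmps come first (ascending extension), largest bmp on top (descending size)
print(multi_column_sort(files))
```

Because Python’s `sorted` is stable, the same effect can also be had by sorting on the secondary key first and the primary key second, which is handy when one key can’t be negated.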
In addition to multi-column sorting, one of the things that people have been asking for for some time is the ability to re-order these columns. And we’ve done that as well. Now, that basically is conducted through a view manager, so you can actually adjust the columns, and when you adjust the columns, you’ll get a window here, and you simply just drag and drop what you want. So if you want the MD5 next to the name, so you can actually see the name and then the MD5 hash if you want, you apply the changes, and the columns will now actually shift for you. So that also happens basically in all of the views that provide a list view. So those are the two big things we’re pretty happy to provide to you with 2016 R2 – things that you guys have been asking for for some time. And on the surface it seems like it should be easy, and unfortunately sometimes it’s not quite that easy. Like for example, it would be real nice to be able to drag and drop these. It’s something that we’re still looking into to see what we can do to provide that. But this is definitely a great way to do it, and it will remember it, so once you close your case and reopen it, it will remember that hey, I wanted MD5 next to my name, and it will stay that way for all your cases.
So that’s the multi-column sorting, and then the multi-column shifting. One of the next things I’m going to talk about, which I’m extremely happy for, it’s been a long time coming – this is something that I’ve been wanting to get to for a while now – and that is the ability for offline maps. So one of the problems that we’ve had in the past, and many of you will remember this, is the previous versions… I’m just going to pull up and… because I don’t keep old BlackLight versions around on this particular system. But if you remember, our previous versions of BlackLight, when you had a location data item, this is what you would see. So you’d go to location view or you’d go to the location tab in the file content view, and you would have this locator map, which for all intents and purposes is okay – I mean, you can see that hey, this image is probably somewhere in California, definitely in the United States. But if you’re in a country, say, the Netherlands, that red cross basically covers the entire country, so really, it doesn’t do you any good. Yes, you could go to Google Maps and get a much more detailed view, but that would assume that you’re on an internet-connected system, and a lot of our law enforcement agencies are not necessarily connected to the internet, in some cases.
So we created the ability so when you go to download BlackLight, make sure you download… there’s a couple of additional resources, and you want to make sure you download the offline maps. Because now, when you have location-based data… [unclear] I’ll just go to location view. So all of these are location-based data, which basically means they’re going to have geo-location in them and we should be able to plot those out on a map. So now, when you install the offline map view, this is what you’re going to see. So this is all done offline, you don’t need to be connected to the internet at all. We’ve generated our own maps based on Open Street Maps, and what will happen is they will default to three different zoom levels. So in the upper left hand corner, where you see the United States, that’s a zoom level of 3, and then right below that you’ll have a zoom level of 5, and then to the right of that is a zoom level of 8.
So what this means is regardless of wherever you’re at in the world, you’ll have those three different zoom levels. So here’s Europe, you can actually get an idea – so this is going to give you a much better indication of where a person was at. We do have the ability… if you want actually a higher zoom level, you can even do that and add that to BlackLight. By default, when you install the offline maps, we install zoom levels 0 through 8 using Open Street Maps. By all means, if you want to generate your own zoom levels, up to 14 or 15, if you wanted to go that high, you could definitely do that. Add those tiles to BlackLight, and basically they will proceed.
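Those zoom levels follow the standard OpenStreetMap "slippy map" tiling scheme: zoom level z divides the world into 2^z × 2^z tiles, so zoom 0 is one tile for the whole planet and each level quadruples the tile count. The conversion from a photo’s latitude/longitude to the tile that contains it is the well-known formula below (a sketch of the public OSM scheme, not BlackLight’s own code):

```python
import math

def deg2tile(lat_deg, lon_deg, zoom):
    """Convert WGS84 lat/lon to OSM 'slippy map' tile (x, y) at a zoom level.
    Longitude maps linearly to x; latitude goes through the Web Mercator
    projection (asinh of tan) before mapping to y."""
    n = 2 ** zoom
    xtile = int((lon_deg + 180.0) / 360.0 * n)
    ytile = int((1.0 - math.asinh(math.tan(math.radians(lat_deg))) / math.pi) / 2.0 * n)
    return xtile, ytile
```

This is also why pre-generating tiles gets expensive at high zooms: level 14 alone is 16384 × 16384 tiles worldwide, which is why offline bundles typically ship low zoom levels globally and higher zooms only for regions of interest.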
So just to give you an example, I’m going to take this image file that I’ve just got this morning, just a jpeg someone at the office took. I’m going to add it in, and I’m going to go ahead… I don’t even need to… I can process the pictures, it’s one file, it’s going to be real quick. I just click on ‘Start’. I’ll give you an idea of what I mean. I’ve actually incorporated some higher-level zoom levels for our areas that… obviously, BlackBag is headquartered in San Jose, this image was taken in San Jose. So when I go there, and look, you can actually see the much higher zoom level that we have. So now I can definitively say not only that he’s in San Jose but what street they’re at. And this happens to be where BlackBag headquarters is. So it gives you a pretty good indication of what you can do, and all this, again, is done offline, so you do not need to have an internet connection in order to see these.
In addition, if I tag this file and I want to incorporate it in my report, this is the image that you will see in your report. One of the reasons this particular iteration did not have zoom level features, meaning [unclear] zoom in now, was because that makes it very difficult when we’re trying to say, hey, we want to put it in a report, and ask you the zoom levels. I will preface that by saying that that is something that we are going to be working on, it’s on a long-term roadmap to actually give you even more features and ability with mapping and zoom levels and how that all goes into the report. So be patient with us, that stuff is definitely coming, but we think that this is definitely a win-win feature for everyone, because you’ll be able to see pretty much where people are at when they’re taking pictures with location data.
Okay, so one of the other things that we did is we spent a lot of time working on email in this particular release. Initially it wasn’t our intention, but we got a lot of feedback, [unclear] that email is still of importance to a lot of the forensics investigators. So we wanted to kind of go back, take a look at what we were offering, and we revamped a lot of the email views that we had. So the first thing is we changed the view.
So if I just go to Racer’s view and I go to communication, and I move the window again – sorry about that – if I go to communication and choose email, you’ll notice that our email view has been changed around significantly and cleaned up. So I can actually see, in this case, this is all Josh’s email, I can see his different email inboxes that he had. So if I happen to go to an inbox, here’s all the email that you’ll notice. So this is very similar to the way it was before. I’m just going to sort on the attachment counts here, so you actually see the attachments. One of the things that we’ve done a little bit differently now is you’ll see these tabs here. The reason one of those emails wasn’t displaying is that it was actually an encoded email, and we aren’t decoding that particular one. But if you go through the emails and see, in the Email tab, we try to give it to you in a view that the user would have seen. So most of the emails will render this way; this is basically what we will call the user view, if you will. So the very basic header information and then what the email looks like. If you go to the Properties, you get pretty much all of the header information that we can display to you for that email. Obviously there is the raw source. And then, on the Attachments, you’ll see a list of the attachments. If you click on that, and then go to the preview, you can actually see, now inline, what the attachment was.
So there’s a big departure from the way it was before – you actually had to export that attachment, bring it back into BlackLight and see what it was, if you could even view it. So again, tagging will work the same way – if you tag the whole email and it has attachments, you will get the option… so for example, if I wanted to tag this email as a new tag, I’m going to get the options for the attachment. And I click in here, and everything that I’ve checked will actually be in the actual tag. So whatever you have will be in that tag and will then obviously go into your report.
The other thing though is that in addition to changing the UI and making it a little bit [unclear] and easier to navigate for email, we [unclear] and actually [unclear] to do Outlook-based email parsing. So that is inclusive of the PST format as well as the OST format for Microsoft Outlook. And that’s Outlook for both Mac and Windows. There are differences between the two unfortunately, but that said, we will parse both. Currently, in 2016 R2, I will just reiterate that there’s a caveat to that, in that we will parse basically all the versions of Outlook that we [knew of] for Windows. However, for the Mac side of Outlook, we will currently only parse the 2016 version of Outlook. We’re going to go back to the older versions, and we’ll actually have those working in another iteration of BlackLight. So currently, the current version on Mac is supported; on Windows, all versions are supported.
So you can see I brought in just a single file PST. If I bring in this… this happens to just be one of the Enron examples, it’s freely available for testing online. So that’s why I’m showing you this. So as you can see, this happens to be one example, and on any of these, you can actually see any of the data the same way. So for example, this Four Seasons doc preview – here’s an actual Word doc from that, and it will give you the preview. So Outlook is going to work the exact same way as all the other email formats that we currently support – which is what we’re pretty happy with.
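The Email, Properties, and Attachments tabs described above correspond to the standard pieces of a parsed message: header fields, body, and MIME attachment parts. As a hedged sketch of the same idea – using Python’s stdlib `email` package on a single RFC 822 message, which is far simpler than parsing a whole PST/OST container – the function name and return shape here are mine:

```python
from email import policy
from email.parser import BytesParser

def summarize_email(raw_bytes):
    """Parse one raw message and pull out the fields an examiner
    sees first: sender, subject, and attachment filenames."""
    msg = BytesParser(policy=policy.default).parsebytes(raw_bytes)
    return {
        "from": msg["From"],
        "subject": msg["Subject"],
        "attachments": [p.get_filename() for p in msg.iter_attachments()],
    }
```

A PST parser does much more (it walks the proprietary container to recover folders and individual messages first), but once a message is extracted, the per-message structure is this header/body/attachment breakdown.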
So those are the kind of major, overall changes within the user interface that you’ll recognize right off the bat. Most of the other things within the user interface have pretty much stayed the same within this particular release. We have gone back and we’ve tried to make sure things are the same throughout all of the views, so that means all of these list views, regardless of in you’re in email, if you come down to the phone and you go to the calls view, all of these list views should basically work and function in the same way. So we’ve spent some time going through and making sure those are all the same, regardless of the view that you’re in. And again – so the main points we talked about for UI changes was the ingestion window, the file content viewer, changes in how you can break those panes out, the multi-column sorting, multi-column shifting, the offline maps and the emails. So those are the big, major changes we did for the user interface in this particular release.
So now, that said, behind the scenes, there were some major improvements in how we do certain things. So for example, in addition to the hashing, obviously, we did the entropy calculation, and to give you an idea of the entropy calculation, it’s very simple – it’s all based on Shannon’s entropy calculations. It’s pretty basic, so you’re basically going to have an entropy level, basically zero through one. So if you have something that’s 0.99 or 1.0, that’s an extremely high entropy value. So that basically is going to be highly compressed data – for example, in this case, movie files. It could be encrypted zip files, things like that. If you have an entropy value less than 0.5 – if you’re talking about a 0.3 or 0.4 – those are probably not as highly compressed or highly structured, so things like key lists are a good example. So that’s basically how the entropy calculation works. You can calculate entropy on any file as long as it’s not zero bytes, and you’ll get a value.
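Shannon entropy over byte frequencies is short enough to show in full. The sketch below scales the raw result (0–8 bits per byte) down to the 0–1 range described above; note that the 0–1 display scale is my reading of the description, not a documented BlackLight detail:

```python
import math
from collections import Counter

def normalized_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, scaled to 0.0-1.0.
    1.0 means every byte value is equally likely, as in encrypted
    or well-compressed data; plain text usually lands much lower."""
    if not data:
        raise ValueError("entropy is undefined for zero-byte input")
    total = len(data)
    h = -sum((c / total) * math.log2(c / total)
             for c in Counter(data).values())
    return h / 8.0  # maximum entropy for bytes is 8 bits per byte
```

This is also why the option skips zero-byte files: with no bytes there are no frequencies to measure, so the value is simply undefined.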
One of the other things we’ve spent time working on – and it’s important for everyone here using BlackLight – is that we were using an open-source library for reading E01 files. We have since changed that: we now have our own internal code for reading and parsing E01 and L01 files. One of the advantages of that is we now also support Ex01 files. In the future we’re going to be adding even more support on top of that. That’s information that will come out later, but we’re going to have even more support for the E01 format than we do now.
Basically, you can surmise that what I’m talking about is the ability to write out E01s – that is on our roadmap, so we’ll have that kind of capability as well.
So there is a huge improvement in the E01 and L01 capability with 2016 R2, in the sense that it’s a lot more robust with the code we wrote. For example, if you have highly segmented E01 images, there were problems in the past with BlackLight dealing with those. Those issues are now all resolved with our new E01 reader. So really, a big improvement there.
In addition, [unclear] Android [unclear] we do offer Android support, [unclear] and have for some time, in addition to Mobilyze. We released Mobilyze earlier this year with support for Marshmallow, and that same support is now built into BlackLight 2016 R2. So for those of you that are getting in a lot of Android devices running Marshmallow, you now have that support within BlackLight, which is a great thing. We also improved and updated our drivers. Unfortunately, that’s one of the crazy things about mobile devices: dealing with all of the different, disparate devices out there, and how they all require, in some cases, fringe drivers. We try to do the best we can with validating drivers on all of the systems and products that we have for testing. Sometimes, unfortunately, that does fail, and you’ll get a device that just has no driver, so you’ll have to go out, find one, and make it work. We’re trying to make that process easier for everyone, so just bear with us, because that is definitely a work in progress. But we do fully support Android Marshmallow in this release, which is definitely a good thing.
And then, as I said before, we made a massive improvement in our Windows parsing time. For those of you that have been running BlackLight on Windows and comparing that against running BlackLight on Mac, you will have noticed there were very big disparities between the two: we were spending a lot of time making Mac as efficient as possible, and Windows was, you probably all felt, kind of the red-headed stepchild. We feel that pain – we heard your calls about that – and we know the majority of you are probably running on Windows platforms. So we turned things around to make sure we’re being as efficient as possible on Windows, and not just Mac.
So we spent a lot of time analyzing the data calls on our backend, the processing options that were happening, what was causing bottlenecks, and where the data slowdown was, and we made tremendous improvements in that backend. For those of you running on Windows, you’ll be very happy to see that with 2016 R2 you should notice some very significant performance improvements. We’re really happy about that, and we’d love to hear from you one way or the other, because it’s good to know that what we’re seeing in our own internal testing is the same thing you’re seeing. So please try it out and get back to us, and let us know if you’re seeing the same significant improvements that we are.
And then the last thing I’ll talk about very quickly is memory. We have tried to create one of the best memory parsers available. We released it a while ago with previous versions of BlackLight, and we’ve updated it even more with this release. In addition to the offline map installer, we now have an additional installer for extra memory symbols. BlackLight ships with its own smaller, default set of symbols, but if you grab the additional symbols pack when you’re analyzing Windows memory, you shouldn’t have any of the issues that people experienced in the past.
In addition, one of the cool things about memory is that, from what I’ve seen and tested and the feedback I’ve gotten, it’s one of the fastest memory processors for getting data out of memory that exists in any of the tools. So for those of you doing, say, an incident response type case, or even forensicators who want to see what people were doing on the web – were they going to social media sites, things like that – you can get that kind of information out. What you’re looking at here is the particular processes that were running when that memory dump was made. That [unclear] in itself may not be super useful to you, but one of the things we also do – and I’ve had this hidden for a while – is basically pull all of this stuff out of that memory image.
So as an example, if you want to see any of the internet searches that were done, you can actually see that from this memory dump. In this case, somebody was looking for car news, looking for BMWs – any of you that have taken our BTT class will have seen this; this is one of our Josh Bennett images. So [it is engendered], but it shows you the power of what you can get out of memory. We’ve made it really simple for users.
I will caveat the memory piece by saying that right now we are only analyzing memory from Windows. I can’t tell you exactly what release Mac support will go into and when that’s going to happen, because we keep that somewhat close hold. But we will be working on Mac-based memory analysis, which will provide the same kind of insight that you see here for Windows.
And there is just one more thing I’d like to point out. Unfortunately, in the forensics world, as most of you know, we are always playing catch-up, and Microsoft, this past Tuesday, just released a great big update to Windows 10. We had already released BlackLight 2016 R2, and of course a couple of days after we did that, Microsoft put out this Windows 10 update, which unfortunately does not allow you to do Windows memory analysis with BlackLight. They changed a lot of the internal structures, so now we’ve got to go look at our code and make sure we know where those structures are, so we can actually parse them. We’re looking at that now, and obviously we’ll get an update out as soon as we can.
So I’m going to cut it there – we’re going to leave about 15 minutes for questions and answers. I’ll just bring this up in case we need to demo something. I’m going to turn this back over to Justin to go over any of the questions. If we don’t get to all of the questions today, please feel free to write in. And one thing I’ll tell you is that I love feedback – good, bad, or indifferent. Feedback is a gift, and it’s hard for us to make any sort of changes without it, whether positive or negative. Either way, you have an open line here with me, so please feel free to give us feedback any time, as much of it as you can – that would be wonderful. So thank you, and I’ll turn this back over to you, Justin.
Justin: Thanks, Drew. So we have a couple of questions, the first being: does BlackLight have a module where you can apply a decryption key? This is speaking to corporate computers with full disk encryption. My understanding is that MacQuisition is the acquisition tool with which you can apply that corporate key to decrypt the data, allowing you to create an image containing the decrypted data from a corporate computer. Drew, do you have anything more on that?
Drew: You’re right, Justie. Currently, no, there is not a module in BlackLight to support things like, say, [Creeden or Array] and the others. That said, I will tell you that this is another priority item we’re hearing about from customers, so it is on our roadmap. It will be completed in the future – I’m not exactly sure when yet; we move things around quite often – but it is definitely on our priority list. But as Justie said, if it’s a corporate machine, there’s generally a master key, so if you have an image, you can mount that image using various tools – MacQuisition would be an example of one – import that master key to unlock the image, and then make that unencrypted volume available to BlackLight to analyze.
Justin: Okay, another question was: do you now have the capability to get data from a locked, encrypted phone, i.e. a physical acquisition or any unlock tool? Currently, we do not have that capability. You will be required to have the encryption key or passcode to get the data. That’s the unfortunate thing: the way encryption works now, especially with iOS devices, is very complex, and it changes constantly. We’re starting to see it even with Android. So at this point, we’re not doing any password cracking per se. So no to that, unfortunately. Drew, do you have anything to add?
Drew: Unfortunately, no. I think you’ve pretty much hit the nail on the head. The encryption stuff is definitely very difficult, and we don’t have anything in place to try to brute-force it. Obviously, I think everybody that encounters it wants that ability. It’s just not simple, by any stretch of the imagination. It is something we have requests for, and at some point it may make it in, but I can tell you that even if it is on [the roadmap], it’s definitely not something in the short term, just because of the difficulty involved in dealing with that.
Justin: Great, thank you. Next question: can we use BlackLight to triage a live computer? There is some processing that occurs when you ingest the image into BlackLight. For live triage, we are actually looking at that – it was brought up as a possible future enhancement for [ICAC] investigators, the folks doing crimes-against-children investigations, to give them the ability to triage a system live. There is a misconception that you can do it with MacQuisition, but that is incorrect. BlackLight is the tool we are focusing on to potentially do live triage in the future, but at this point we don’t have anything out to support that. Anything on that, Drew?
Drew: BlackLight does have kind of a way to do it, but not in the sense that the question was worded. This is the live system right here. The thing is that BlackLight is a fairly large piece of installed software. You can run it standalone, so after it’s installed, you can technically put it on a USB key and run it from that, though there are certain things we would need to change in the framework to better support that. So in this case, BlackLight is running on this Macintosh HD, and I could ingest this live volume if I wanted to. But that’s not what I think the point of the question was. And Justie, you were exactly right: being able to triage any system, live or dead, within BlackLight – so you can see the file system presented without having to make a full disk image – is something we have been actively pursuing. That is definitely a very high priority we are looking at.
Justin: Great, thank you. We have a question about how to add higher zoom levels for the maps. Drew, can you walk them through that, if we have the time to do so?
Drew: There are several ways you can do it. I can’t give away the keys to the kingdom on how we do it, but I will tell you it is not super difficult. If you go to OpenStreetMap, you can generate your own tiles for whatever zoom level and map area you want. If you want to do the entire world, you can do that. For example, we have a zoom level nine set that we don’t currently ship with – we might in the future – which gives you a little more detail than zoom level eight. So you can go to OpenStreetMap, or you can even install your own map server, in which case you can make your own maps and do your own rendering without having to have a wait period. All the documentation on how to generate your own is on the OpenStreetMap site. And when you generate your own, what you’re going to get is PNG tiles – just individual tiles.
So when you install our offline map pack, it creates this folder structure under BlackBagTech/OpenStreetMap, and within that there will be a tiles folder. You’ll notice there’s nothing in here – you just add your tiles. So if you happen to have zoom level nine, you would put those tiles right into this tiles folder. We actually have our own database of tiles, and there’s a reason for that: when you start generating your own street map tiles, you’re going to see how very large they get, and there are a lot of duplicate files. For example, zoom level nine, with just the PNG tiles, is I think over a gig in size. So we have our own way of taking those same tiles, reducing them, and putting them into a single file, which makes it easier to install and a little more portable. But all you have to do, if you want to generate your own, is have OpenStreetMap create those PNG tiles for you and then put them in the tiles folder. BlackLight will automatically recognize them and use them accordingly.
So when you have your own street map tiles, you’ll notice – on this particular one I’m showing on the screen; give it a second to update and you’ll see it – that it automatically updated the zoom level over here to the higher level we had, which in this case is zoom level 13. That [happened to be] that other file you saw, called Tiles. So if you do download your own, BlackLight will automatically use the highest-resolution, highest-zoom-level tile for the big right-hand side of your location view.
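For anyone generating their own tiles, the standard OpenStreetMap “slippy map” naming scheme determines which `zoom/x/y.png` file covers a given coordinate. Here is a minimal sketch of that public formula; the folder path in the example follows the layout Drew describes, and the sample coordinate is a made-up illustration.

```python
import math

def latlon_to_tile(lat_deg: float, lon_deg: float, zoom: int) -> tuple:
    """Convert latitude/longitude to OpenStreetMap 'slippy map' tile
    coordinates (x, y) at the given zoom level."""
    n = 2 ** zoom  # tiles per axis at this zoom level
    x = int((lon_deg + 180.0) / 360.0 * n)
    # Web Mercator projection for the y axis.
    y = int((1.0 - math.asinh(math.tan(math.radians(lat_deg))) / math.pi) / 2.0 * n)
    return x, y

# A zoom-level-9 tile would land in the folder layout Drew describes,
# e.g. BlackBagTech/OpenStreetMap/tiles/9/<x>/<y>.png
x, y = latlon_to_tile(37.77, -122.42, 9)  # San Francisco, roughly
print(f"tiles/9/{x}/{y}.png")
```

Each zoom level quadruples the tile count, which is why the tile sets get so large so quickly.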
Justin: Great, thank you so much. Now, a question about whether BlackLight supports the Ex01 format. We currently support E01. [unclear], correct, Drew?
Drew: 2016 R2 now supports both E01 and Ex01. Obviously, if the Ex01 is encrypted – if it actually has a password on it – you’ll have to supply that. But it does support both now.
Justin: Alright. And we have two final questions. The first: as far as showing locations on a map for where photos were taken, can you create a single map with multiple locations to show a path someone traveled? We do have the ability to export media that has geolocation data attached to it. And what we have taught in the past – unfortunately, with forensic machines that are typically air-gapped, with no connectivity to the internet, it wouldn’t work all that well – but you have the ability to take the file that you export and bring it to a machine that has, let’s say, Google Earth on it.
So for instance, you can select the pictures of interest – say your suspect was taking pictures along their path while committing a crime, and you know those are part of your investigation – and export them to a KMZ file, which is inherent to Google Earth. It gets written out to the location of your liking, and then you take it over to another machine. Say your machine is air-gapped: take the file to one that has internet connectivity and Google Earth, bring it into Google Earth, and create a document in that respect. You can screenshot it and, as Drew showed with the single file, bring it in as a document to include with your case file. Anything more on that, Drew?
Drew: That’s exactly right. We have the ability to do a single [KML] export for all location data, so you export that file, and the best way is to import it into something like Google Earth. If you’re online, put it in there and then, like you said, bring it back out. That said, we do have some design plans we’ve been toying with to do something very similar in an offline capacity, using the maps we currently have. To do that, there’s a lot of behind-the-scenes work to figure out: are we just showing, as an example, the United States and the pinpoints on it, or the entire world? If you look at the screen, for example, there’s one point right there in California, in the United States. If we used that zoom level and pinpointed, say, a hundred pictures, you’d very quickly see that at that particular zoom level it wouldn’t be as effective, because you can’t zoom in. So we have to be able to account for that type of activity in an offline state. That’s something we would definitely like to get to – I just can’t promise you when that’s going to happen.
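The export workflow Drew and Justin describe boils down to writing placemarks into a KML file (a KMZ is simply a zipped KML). Here is a minimal hand-rolled sketch of such a file – not BlackLight’s actual exporter – where the photo names and coordinates are made-up examples standing in for EXIF GPS data pulled from evidence.

```python
# Each entry: (photo name, longitude, latitude) - hypothetical sample data.
photos = [
    ("IMG_0001.jpg", -122.4194, 37.7749),
    ("IMG_0002.jpg", -122.2712, 37.8044),
]

# Build one <Placemark> per photo; Google Earth plots them all on one map.
placemarks = "".join(
    f"<Placemark><name>{name}</name>"
    f"<Point><coordinates>{lon},{lat},0</coordinates></Point></Placemark>"
    for name, lon, lat in photos
)
kml = (
    '<?xml version="1.0" encoding="UTF-8"?>'
    '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
    f"{placemarks}</Document></kml>"
)

# Write the file out; carry it to an internet-connected machine with
# Google Earth to visualize the path.
with open("suspect_path.kml", "w") as f:
    f.write(kml)
```

Note that KML orders coordinates longitude-first, the reverse of the usual spoken “latitude, longitude” convention.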
Justin: Great, thank you. Next question: will BlackLight also work with the new APFS file system? I will defer that to you, Drew.
Drew: I’m assuming this is probably the last question. There are some major plans in place: BlackLight is, in the very near future, undergoing some major changes that will incorporate a lot of new file systems. I’ll just leave it at that. But 2016 R2 does not currently support APFS.
Justin: Okay, great. And the final question, I guess, would be how to parse a Cellebrite dump in BlackLight. In my own dealings trying to bring in Cellebrite data, there are a couple of things we have to consider, such as how [unclear]. I tried using it with UFED Touch dumps, and those are unsupported, based on the types of files being exported. We do support Physical Analyzer dumps, and if I’m not mistaken, we have that documented in our manual. Is that correct, Drew?
Drew: All of Cellebrite – whether you’re using Physical Analyzer, or type 1 versus type 2; you know they keep changing the way they do things – all of that is documented in the user guide.
Justin: Okay. Alright. If we don’t have any further questions, we’re going to… let me see if I can do this without screwing things up here. Alright, I’m back. Okay, this will conclude the webinar.
As you can see, there is some contact information there, along with some upcoming courses. Please feel free to contact us should you have any questions or want further information about the tools we offer. As you can see from the upcoming classes, we’ve got some scheduled out for the rest of the year. The BlackLight tools training classes are two-day training classes on BlackLight. Currently, they’re free, and attendees have the opportunity to take the Certified BlackLight Examiner test at the very end of the two days.
The two courses Essential Forensic Techniques 1 and 2 are week-long, in-depth classes dealing with Mac forensics using BlackLight. There is a cost involved, which is cited on our website, so check the website for more on that. Completion of both will prepare you to take the Macintosh and iOS certified forensic examiner’s certification test.
We encourage you to look at those classes and attend them if they’re available in your area. We try to keep as many as possible local, but if travel is an issue and you can fill seats, we can always look at arranging for you to host the training yourself. We also offer free trial licenses, ranging anywhere from 15 to 30 days, if you’re interested in trying out BlackLight – they are complete, fully working licenses. If you happen to get one, we encourage you to provide us some feedback on your experience with BlackLight. Our mission is to continually develop the most powerful forensic tool on the market, so you can easily reveal the truth.
So thank you all for attending, and to all of our brothers and sisters in law enforcement, please be safe out there. Thank you.