Join the forum discussion here.
View the webinar on YouTube here.
Read a full transcript of the webinar here.
Update: BlackLight 2018 R1, when combined with MacQuisition 2018 R1, is the world’s first and only complete end-to-end acquisition, decryption, and analysis solution for the latest Apple File System (APFS). Both MacQuisition 2018 R1 and BlackLight 2018 R1 have been released since this webinar was recorded. For more information, visit BlackBagTech.com.
Ashley: Thank you for joining BlackBag’s ‘Ask the Expert’ webinar series. My name is Ashley Hernandez and I am the Director of Products at BlackBag Technologies. Today we are excited to present the APFS ‘Ask the Expert’ session. We’ve had numerous questions from our customers and Mac forensics users on APFS, and I’m excited to introduce Dr. Joe Sylve, who’s going to walk us through some of the common questions that we get asked, and also cover what we are going to be providing users in the way of support in both our recently released MacQuisition 2018 R1 and the soon-to-be-released BlackLight 2018 R1.
So, I would like to introduce you to Dr. Joe Sylve.
Joe: Hi! Thank you, Ashley. My name is Joe Sylve, I’m the Director of Research and Development at BlackBag Technologies, and today we’ll talk a little bit about the Apple file system or APFS, which is going to be the replacement, or is now the replacement, for HFS+.
So, we’ll cover a broad range of topics: why we have to deal with a new file system in the first place, which devices we’ll find APFS on, what the features of the file system are at a high level, and how to identify and work with devices that use APFS. Then, finally, we’ll go into what the implications of this new file system are for us as investigators.
A little bit of a history lesson here: Apple’s first major file system, HFS, came out in 1985. In 1998, they released a new file system based on HFS, called HFS+. Very recently, at the end of 2017, Apple transitioned to a brand-new file system called the Apple File System, or APFS. Work started on it in 2014, and it’s intended to be the one file system across Apple’s devices – macOS, iOS, watchOS, tvOS. It’s a completely new implementation and has some interesting features we can talk about.
So, which devices will you find APFS on? All iOS devices running 10.3 and above – even if you had a device with HFS+ on it before, when you upgraded to 10.3, the file system was silently upgraded to APFS. Any Mac shipped with macOS 10.13 (High Sierra) and above, and some systems that were upgraded to macOS 10.13 – specifically, machines with solid-state drives. In those cases, the file system is silently converted from HFS+ to APFS, even if Core Storage is enabled. And all devices going forward will mostly have APFS on them, and that includes Apple TV, Apple Watch, etc.
The exception to this is Fusion Drives. It seems that Apple is having some problems with APFS on Fusion Drives, so Fusion Drives and traditional hard drives are not automatically converted to APFS upon upgrade.
So, why did Apple decide to build a new file system in the first place? Well, from the information they’ve told us and what we can gather from studying the file system, it offers improved performance, native encryption, better space management, and improved versioning and backup built in, and it supports other modern file system norms like sparse files, nanosecond timestamps, fast directory sizing, and native extended attributes. While HFS+ may have had some or all of these features, they were mostly added on after the fact, and they were kind of hacks. APFS is a brand-new file system designed from scratch, with first-class support for all of these features.
The first feature we’ll talk about is that APFS uses 64-bit integers for inode numbers. While we might have had four-billion-plus possible files before, now we have roughly nine quintillion. We’re unlikely to actually need that many files, but forensically this means there should be, hopefully, less reuse of inode numbers. Because we have so many inode numbers available to us, there is very little opportunity for those numbers to wrap around and be reassigned.
This was partly just to future-proof the file system, and it also helps support some other features, such as cloning and snapshot capabilities, which we’ll talk about next.
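To put those inode-number figures in perspective, here is the arithmetic. This is just a back-of-the-envelope sketch; the “nine quintillion” figure assumes a roughly 63-bit usable range, since the exact reserved ranges aren’t covered in the webinar.

```python
# HFS+ catalog node IDs are 32-bit integers; APFS inode numbers are 64-bit.
hfs_plus_ids = 2 ** 32   # "four billion plus"
apfs_ids = 2 ** 63       # roughly nine quintillion (assuming one bit is reserved)

print(f"{hfs_plus_ids:,}")   # 4,294,967,296
print(f"{apfs_ids:,}")       # 9,223,372,036,854,775,808
```

Even at a billion new files per second, exhausting the 64-bit space would take centuries, which is why inode-number wraparound is effectively a non-issue.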
Clones allow instantaneous copies of a file or directory. Basically, if you copy and paste a file using the Finder GUI, rather than creating a brand-new file, new metadata will be created, but that metadata will point to the same blocks as the original. If you change a byte in that file, those original blocks will be copied on write, and then you’ll have another copy of the file on disk.
The original file and the clone do share the same blocks, but they’re treated as separate files. This prevents data duplication and really minimizes storage use.
We’ll show you an example of this. We have a very small volume with basically one file on it, Tools.dmg. If you look at the info, you’ll notice that the capacity is 104.8 MB and 63.7 MB is available.
So, if you make a copy of this file, you’ll notice that those statistics don’t change. Even though we now have two files, because those files have identical content, the space used doesn’t change.
Now, as far as we can tell in our research, this only happens when you make a direct copy of a file. If you happen to create a second file with the exact same data as an existing file on disk, that data will be duplicated on the disk.
Ashley: Joe, we do have a question on clones, and one of the questions regarding clones is: Can I tell the difference between the original file and the clone by any of the metadata or other structures?
Joe: To my knowledge, no. The clone’s metadata looks pretty much like any file’s metadata. You could probably do some deep analysis to figure out whether any of these files are pointing to the exact same blocks, and then, by looking at the creation timestamps in the metadata, you could probably infer which was the original. However, I’m not quite sure of the forensic value of this.
Ashley: Alright. And then, one more question regarding the previous topic, when we were talking about inodes – now that we have 64-bit inode numbers, with the reduced reuse of inode numbers, the question is: does Apple wipe the inode entry upon deletion in any way?
Joe: Not directly. We’ll talk about that a little bit when we get into the [07:54] of the file system. But there is no direct zeroing out of metadata when a file is deleted. Are there any further questions at this point, or shall we move on?
Ashley: One more actually. If you make a copy of a copy – so if you make a clone of a clone – is that naming convention still going to be the same? Meaning it would be like ‘tools’, ‘tools copy’, and then ‘tools copy copy’, and does the operating system still handle that the way it would have handled it before?
Joe: Yes. The naming convention is just a side effect of the Finder UI. You can rename these clones to anything you like, and they’d still behave the same way. But yes, you can have clones of clones; there’s no difference.
Ashley: Awesome. I think that’s all the questions for now.
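Joe’s description of clones can be sketched with a toy model. This is not real APFS code – the classes and field names here are invented purely for illustration – but it captures the bookkeeping: a clone initially shares the original file’s blocks, and a write allocates new blocks only for the changed extent.

```python
class BlockPool:
    """Toy allocator standing in for the container's shared block pool."""
    def __init__(self):
        self.blocks = {}       # block number -> bytes
        self.next_block = 0

    def alloc(self, data):
        num = self.next_block
        self.next_block += 1
        self.blocks[num] = data
        return num

class File:
    def __init__(self, pool, block_numbers):
        self.pool = pool
        self.block_numbers = list(block_numbers)

    def clone(self):
        # A clone copies only the metadata (the block list), not the data.
        return File(self.pool, self.block_numbers)

    def write_block(self, index, data):
        # Copy-on-write: the changed block gets a new allocation;
        # untouched blocks stay shared with any clones.
        self.block_numbers[index] = self.pool.alloc(data)

pool = BlockPool()
original = File(pool, [pool.alloc(b"AAAA"), pool.alloc(b"BBBB")])
copy = original.clone()
assert copy.block_numbers == original.block_numbers        # fully shared

copy.write_block(1, b"CCCC")
assert copy.block_numbers[0] == original.block_numbers[0]  # still shared
assert copy.block_numbers[1] != original.block_numbers[1]  # diverged
```

This is also why copying a file doesn’t change the volume’s free space in the demo above: until a write occurs, the clone costs only metadata.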
Joe: The next feature we’ll talk about is snapshots. The file system has a built-in backup mechanism, which is nice. Snapshots allow users to restore files on their system to a previous state. They’re basically read-only copies of APFS volumes, and they’re created instantly. Under the hood, snapshots preserve file and metadata blocks as they were at the time of the snapshot, so they’re not released back to the pool when a file is changed or deleted. A user has the capability of saying, “I want to revert my file system back to a specific time,” from a snapshot taken either manually or through some automated process such as Time Machine. Right now, it doesn’t seem that Time Machine is taking advantage of APFS snapshots directly on external disks, but we can see that they may be going in that direction.
Finally, the last feature we’ll talk about is that APFS has built-in, first-class support for native encryption. Historically, HFS+ did not come with full disk encryption support. That was not really a priority in the late ’90s.
However, as the need arose, Apple had some ways of handling this. They introduced Core Storage, a virtual volume system that encrypted everything at the volume level. So, rather than having support for encryption in HFS+, they added a layer of encryption around it, so the volume itself was encrypted. Once you mounted the encrypted volume, a synthesized device would be created, which had a logical, unencrypted version of the file system on it. If you’ve ever had to image Macs, you may have acquired this device, which gives you a complete, unencrypted version of the file system.
On iOS, they decided to handle things differently. It was still HFS+, but a small variant of it that allowed file-level encryption keys and hardware acceleration. None of the file system metadata was encrypted – just some of the contents. The file data blocks were encrypted with a per-file key.
Now, on both iOS and macOS, if you’re using APFS, encryption is built in at the file system level, with first-class support. That means all of the metadata, as well as the file contents, is encrypted, and you can encrypt with any number of different keys.
Before we move into the APFS disk structure and theory, are there any questions?
Ashley: We did have one more follow-up on clones – I know clones are a popular topic because it’s something new. So, with clones, what happens if you modify the original file? Does the modification reflect in the clone?
Joe: It will not. All of these clones are treated as copy-on-write; they are not like hard links. If you make a change to any one of these individual files, a different copy of the file’s data will be created and assigned to the file that you changed.
Ashley: So you could end up with deltas from the original file in the first copy, and then deltas from the original file as that file changes as well.
Joe: It’s unclear to me whether they’re actual deltas or whether all of the blocks are completely reassigned as copy-on-write. That probably depends on whether some of the extents are the same and some are not. More research is needed there.
Ashley: Great. I think that’s all the questions before we move on to the structure.
Joe: Alright. So, now we’ll talk about the higher-level structure of APFS. We do have information about the actual on-disk structures, which we can get into if necessary.
The first structure we’re going to talk about is the APFS container. A container is a logical store that can be made up of one or more physical stores, and a physical store is simply a disk or a partition. We’re used to the notion of having one file system per partition, with each file system a fixed size. APFS is a pooled storage system, which means we take one or more physical stores together, and those physical blocks are shared among one or more volumes. The APFS container is identified by its own block device – even if you only have one physical store, you’ll have a separate block device, which is the logical concatenation of the physical stores. You can image either the physical stores themselves or the APFS container. If you have multiple physical devices, you really do need all of the physical stores to rebuild the container, and only one of them will identify as the container device itself.
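As a concrete example of identifying an APFS container: per the publicly documented container layout, block zero holds the container superblock, with the magic value ‘NXSB’ at byte offset 32 (after a 32-byte object header), followed by the 4-byte little-endian block size. A minimal detection sketch, assuming those offsets:

```python
import struct

def identify_apfs_container(first_block: bytes):
    """Return the container block size if this looks like an APFS
    container superblock, else None. Offsets follow the publicly
    documented layout: a 32-byte object header, then nx_magic ('NXSB')
    and nx_block_size."""
    if len(first_block) < 40:
        return None
    if first_block[32:36] != b"NXSB":
        return None
    (block_size,) = struct.unpack_from("<I", first_block, 36)
    return block_size

# Synthetic example: a 4096-byte "superblock" with the right fields set.
blk = bytearray(4096)
blk[32:36] = b"NXSB"
struct.pack_into("<I", blk, 36, 4096)
print(identify_apfs_container(bytes(blk)))   # 4096
```

A real tool would of course validate the header checksum and object type as well; this sketch only shows where the signature lives.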
So, I mentioned that APFS is a pooled storage system. What that means is you can have many lightweight volumes, and you don’t have to specify the size of these volumes ahead of time. Previously, if you wanted multiple volumes, you’d have to partition your disk – say, assign 60 GB to volume one and 40 GB to volume two. And if you wanted to add a third volume, you would have to repartition and resize all of these volumes, which might lead to some loss of data.
APFS has a very lightweight notion of volumes, where volumes only take up as much space as they need, and they grow and shrink as needed. As you create files on a volume, they take blocks from the free space in the container itself, and if you delete those files, the blocks are eventually released back into the pool. So, we no longer have a notion of free space for a specific volume, because a volume is only assigned the blocks it’s using. The free space is shared among all the volumes in the container.
Ashley: I do have a question on this one for you, Joe, which is: does that mean a volume’s space is not contiguous? I know you mentioned earlier the kind of design for [FSPs], but does that mean a volume’s allocated space could be spread all across that [FSP]? It’s not how we would have thought of it with traditional hard drives, where it just grows in [thick] segments?
Joe: Absolutely. A volume’s space is no longer contiguous. The APFS container is almost a file system in itself, in which these volumes can grow and shrink. The blocks that are assigned to a volume will not necessarily be contiguous on disk – and most of the time they will not be – which is why we are no longer able to read from a specific block device for each volume. There will be block devices created for these volumes; for example, if the container was given the device node /dev/disk1, you will still get /dev/disk1s0, s1, s2, etc. for each of the volumes. But those block devices aren’t actually readable, and the reason for that is certainly because the blocks are not contiguous on disk.
Ashley: Great. That covers the questions that we have on the volumes that we have so far.
Joe: So, here’s a more visual example, where we have the storage and two volumes in this one container: one is Macintosh HD and the second is Media. The grey space shows the space being used by that specific volume. The white space with the grey lines shows space that is not available to that volume, because it’s being used by the other volume. And the plain white space shows space that is available to both volumes. So, the room these volumes have to grow is only limited by the space that is not assigned to any other volume.
Because volumes are so lightweight, I would not be surprised if we start seeing more and more of them. Right now, when you upgrade to 10.13 and your HFS+ file system is converted, you’ll see four different volumes created, and these volumes can be mounted at any point in the file system. Because volumes support different levels of encryption, I would not be surprised if we start seeing a different volume for every user on the system, where each volume can have different encryption keys. That’s not how it’s implemented currently, but because volumes are so lightweight, things may be going in that direction.
Are there any questions about the pooled storage before we go into the forensics implications?
Ashley: Not at this time.
Joe: So, the first obvious implication of analyzing a pooled storage file system is that it affects our file carving. While there is still a notion of blocks that are allocated and unallocated, individual volumes don’t have unallocated blocks associated with them. When you delete a file on a volume, that file’s blocks are released into the pool itself – they’re released to the container, and can be used by any volume that needs space. So, you can still carve those unallocated blocks: you can still parse the metadata for the container and figure out which blocks are currently assigned to which volume, and you can still do file carving on the unallocated blocks.
However, we will not necessarily be able to tie the results we find to a specific volume, because these blocks are shared, and there’s no longer any easily accessible metadata to say which volume released a given block back into the pool – because our volumes grow and shrink as needed.
So, file carving is still a possibility – there does not seem to be any secure deletion of files by default. However, we’re not going to be able to carve per volume, so if we have many volumes on a store, the data we carve is not necessarily going to be associated with a particular volume. And the more volumes we have – for instance, if they decide to make a volume for each individual user – the more of a problem this will be.
Ashley: Joe, are there any boundaries at the block level, or are there no block-level boundaries?
Joe: I’m not sure what you specifically mean by boundaries, but blocks are 4K in length, and that is the smallest unit that will be assigned.
So, the pool will never assign a block smaller than 4K, and if you need more space than that, it’ll assign multiple blocks.
Ashley: Perfect. That answers the question.
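As a worked example of that 4K granularity (a sketch assuming the default 4,096-byte block size): a 10,000-byte file needs three blocks, and the unused tail of the last block is the familiar file slack.

```python
import math

BLOCK_SIZE = 4096  # default APFS block size

def blocks_needed(file_size: int) -> int:
    """Smallest number of whole 4K blocks that can hold the file."""
    return max(1, math.ceil(file_size / BLOCK_SIZE))

size = 10_000                        # a 10,000-byte file
blocks = blocks_needed(size)         # 3 blocks allocated
slack = blocks * BLOCK_SIZE - size   # 2,288 bytes left over in the last block
print(blocks, slack)                 # 3 2288
```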
Joe: APFS uses strong copy-on-write semantics for its file system metadata, which means that when an object is updated – and by object, I mean one of these 4K blocks – an entirely new 4K block is created and referenced. Even if you update, say, a single timestamp in the metadata, it’s not going to change that timestamp in place at its existing offset in the file system. It’s going to create an entirely new copy of that 4K block with the change, and then update its references to point to the new block.
This has several forensic implications for us. It means we should still be able to scan for these old objects, which means we might be able to recover metadata, and even file contents, from older versions of files. However, it also means that even trivial operations overwrite at least 4K on disk because of this copy-on-write. Now, this doesn’t necessarily mean that every time we access a file we’re overwriting 4K on disk because of file access time updates; the file system likely has a way of pooling these updates and periodically flushing them to disk. But we have seen that simply deleting a directory of files does overwrite quite a bit of data on disk.
Snapshots – with the advent of snapshots, objects from specific points in time will be locked, which means that even if those files are changed or deleted, the old blocks will not be released into the pool for reuse.
So, this makes it possible to reconstruct the entire historical volume state, at the block level, at the time of the snapshots.
Those of you who are familiar with NTFS volume shadow copies will find this a little more similar to that, where you’re getting the entire file system, including the metadata, rather than how Time Machine backups currently work with HFS+, which are mostly just copies of the data at the file level, with some hard links to reconstruct the state of the file system.
Ashley: Joe, do we know if snapshots are enabled by default with APFS?
Joe: Snapshots are not something that you have to specifically enable. They can be triggered manually by a user, but we have also seen snapshots being triggered in some cases, such as [23:58] local backups. We need to do a little more research to see exactly when they’re triggered automatically and under which conditions. But it’s not something optional that you have to specifically go in and enable. It also doesn’t seem to be something where, say, a snapshot is created every week, as far as we can tell.
Ashley: We have a few more questions on snapshots. How does a user access or restore snapshots? Is there a specific area that they do that with?
Joe: Yes. Unfortunately, we don’t have any screenshots of that, but it can be done using the built-in [24:38].
Okay, moving on – block devices. APFS-created block devices behave a little differently than what we’re used to. We’re used to having a block device for the physical disk and a slice device for each physical partition. We still have those, but in addition, we have a synthesized block device for the container. So if you have multiple physical stores, or multiple physical partitions, you can image all of these things together, and your software will have to be able to put them together logically.
Or you can image the synthesized block device, which is the logical concatenation of the container. That might be the better approach, especially for Fusion Drives and other drives that may have multiple physical partitions on them at a time.
There are slice devices created for each of the individual APFS volumes, but they’re not readable. This becomes more of an issue once we start talking about encryption, because encryption is now handled at the APFS object level – it’s built into the file system, not just a volume wrapper. With Core Storage, even if your forensic software had no notion of how to natively decrypt Core Storage, you could mount the Core Storage volume, image the logical decrypted partition, and then it was the same as if you had an unencrypted HFS+ file system.
APFS doesn’t work like that. There is no block device that can be read to get the unencrypted version of a volume, so our tools actually need to know how to read APFS-encrypted volumes. We’ve noticed that there are slight differences in how this encryption is implemented depending on the scenario, and we have identified three scenarios. First: if you format a new disk and choose APFS encrypted, it will prompt you for a password at that point, and a single volume key will be created, as well as a backup key. So you have one password and a backup key that you can use to unlock the volume.
The second option is if you enable FileVault. That works somewhat differently: rather than having one master key for the entire volume, the keys are wrapped with an additional layer for each individual user on the system, as well as a recovery key.
Third: if you had a system that used Core Storage full disk encryption – HFS+ with FileVault enabled – and then upgraded to APFS, the encryption is handled a little differently. You still have multiple keys for the various levels, but rather than decrypting each block of the Core Storage encryption and re-encrypting it with APFS encryption, APFS just keeps using the same encryption method that Core Storage used.
All three of these scenarios are supported in BlackLight as well as MacQuisition.
So, before we go into what it looks like in the tools, are there any questions about how this might affect you forensically?
Ashley: There is a question that – and I think we’re going to talk about this as we go into how the tools work – MacQuisition does allow imaging of the APFS container, or, as they were referring to it, the synthesized container. Will BlackLight 2018 R1 now be able to ingest and parse these images?
Joe: Yes, it will. You can ingest either the physical device or the synthesized container.
Ashley: And then I do have one more question. If a file is deleted – this is back to our snapshots – do all the related snapshots get deleted also?
Joe: If a file is deleted and you have snapshots, the old versions of that file’s metadata still exist on the file system. I think what I was trying to explain when my internet connection dropped is that there’s a notion of B-trees that store the file system metadata. These B-trees each contain objects, and these objects have object IDs and version IDs. When you take a snapshot, the old version of an object stays in the B-trees, and the new version is created with an updated version number.
So, the live file system is only going to show you the newest version of the metadata and file blocks. If you delete that file, the current object metadata might be deleted; however, the old version of that file’s metadata, because you have a snapshot on the system, will remain – unless that snapshot is deleted, and then perhaps it would be removed as well.
Does that answer the question, Ashley?
Ashley: It does. And I have a few more questions about what’s going to be supported in BlackLight and MacQuisition, but we will cover those as we go through the Mac section. So, I’m going to hold those [30:02] questions after we go through what it looks like in MacQuisition and in BlackLight. Then we’ll cover those at the end. So, go ahead and move forward.
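Before moving on, the object-versioning idea Joe described can be sketched abstractly. Treat each metadata object recovered from disk as an (object ID, transaction ID) pair: the live file system follows only the highest transaction ID per object, while older versions pinned by a snapshot remain enumerable. The names and payloads below are purely illustrative, not actual APFS structures.

```python
from collections import defaultdict

# (object_id, transaction_id) -> payload; a stand-in for metadata
# objects found by scanning the disk.
found_objects = {
    (100, 1): "inode v1: report.doc, size 4 KB",
    (100, 5): "inode v2: report.doc, size 8 KB",   # later update of the same object
    (200, 2): "inode v1: notes.txt",
}

def live_view(objects):
    """The mounted file system sees only the newest version of each object."""
    latest = {}
    for (oid, xid), payload in objects.items():
        if oid not in latest or xid > latest[oid][0]:
            latest[oid] = (xid, payload)
    return {oid: payload for oid, (xid, payload) in latest.items()}

def historical_versions(objects):
    """Older versions (e.g. pinned by a snapshot) are still recoverable."""
    history = defaultdict(list)
    for (oid, xid), payload in sorted(objects.items()):
        history[oid].append((xid, payload))
    return dict(history)

print(live_view(found_objects)[100])                  # inode v2: report.doc, size 8 KB
print(len(historical_versions(found_objects)[100]))   # 2
```

The forensic takeaway is the second function: a scanner that walks all versions, rather than just the latest, can surface metadata the live mount no longer shows.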
Joe: Great. So, MacQuisition 2018 R1 has been released – I believe it was released earlier this week, or last week – so it is available now. It has full support for APFS acquisition, as well as encrypted containers.
First, I want to mention that you’ll notice a disk is marked as encrypted here in red, so it’s pretty obvious. You do not need to decrypt these containers to take images, and you do not necessarily need to decrypt them at this stage to bring them into BlackLight and analyze them. But if you would like to use some of the functionality of MacQuisition that allows you to triage and collect logical files, you obviously need to unlock the volume. You can choose to unlock a volume either via the password or the recovery key, and that is true both in MacQuisition and in BlackLight – either one will work.
Then you’ll notice that we’ve successfully unlocked the volume. At this point, you should be able to use all the functionality in MacQuisition that you would normally use for triage and logical acquisition. We still select our image device to acquire, and everything works as you are familiar with in MacQuisition. You’ll notice here that, because we’ve unlocked the disk, we’re able to see individual logical files in the address book, in the chat messages, etc.
Again, if you just want to take a physical image of the disk and you don’t have the password at the time of acquisition, that’s fine – you can still acquire the encrypted version, and when you bring it into BlackLight, if you have the password at that point, we can do our file system analysis on it. This allows you to do logical acquisition as well.
Ashley: A couple of clarifications. One is: for ingesting encrypted images into BlackLight, can we do that on a Windows or a Mac machine, for APFS encryption?
Joe: Yes. BlackLight has first-class support for APFS, including encryption. We’re not using any file system APIs; we’ve completely reverse-engineered the file system and done our own implementation. So everything that I’m talking about on the BlackLight side will work just as well on Windows as it will on macOS.
Ashley: Alright. And then another question is: Do we image disk zero or disk one when we’re looking at MacQuisition, or does it even matter?
Joe: It doesn’t matter. You can image either one, and both will be ingested by BlackLight. My recommendation would be: if you have a Fusion Drive, or any other drive that has multiple physical stores, you should image the logical synthesized disk – the container itself. This will let you analyze it in the current version of BlackLight, 2018 R1 – I say the current version; it will actually be released in the next week or two. The initial version will not have support for multiple physical stores, but you will always be able to ingest a synthesized container, so you probably should take that. However, if there are other volumes or physical partitions on the disk that you may be interested in, the physical image may be of use to you as well.
Ashley: Great. And just a note – with MacQuisition 2018 R1, we did add a couple of other features. One is that you can now export to an APFS-formatted destination drive, and also, for a logical collection like you’re doing here, you can now create a sparse image with it. I did want to note those before we move on. I think next we’re going to talk about what it looks like in BlackLight.
Joe: Yes. So, APFS support in BlackLight is not entirely different from what you’re used to. The file system itself doesn’t really impact the file-level artifacts that you’re familiar with. So, if you’re upgrading from HFS+, all the same stuff will still be there – FSEvents, logs, and all of the goodies that we’re able to parse in BlackLight. However, because of the notion of pooled storage, we did have to make a few changes in our UI that are specific to APFS, and those are the ones I’m going to talk about today.
Here, you notice that we’re bringing in an APFS image – an image of a physical disk. So, we still see all of the different physical partitions: the FAT32 EFI partition, and this one actually has a physical NTFS partition. But you’ll notice this new grey box, which represents the logical APFS container. All of the partitions you’re seeing inside of it are the APFS volumes, with their individual sizes – the sizes actually being used. And you’ll have the one unallocated device, which is the unallocated space that’ll allow you to carve the container itself.
Because the individual APFS pooled volumes don’t have any unallocated space associated with them, you’ll notice that file carving is disabled for the individual volumes. However, you can still carve across the container by importing that unallocated APFS partition.
If you try to ingest an image with an encrypted volume on it, it’ll look much the same; however, the volume will not be selected by default. Next to the volume, you’ll see that it just says it’s encrypted, and if you click on the little box next to the encrypted volume, you’ll get a password prompt that looks similar to this. If there’s a password hint, it will be shown to you, which may help you guess the password. If it’s a FileVault-encrypted volume, you can enter the password of any one of the users – any one of them will work – or the recovery key itself. If it was upgraded from Core Storage, the same thing applies. And if it’s an APFS-encrypted volume that only has one password, you’ll have to enter that password or the recovery key.
Ashley: Joe, I do have one question about the prompt for encryption. We have encryption support natively built into MacQuisition 2018 R1, but what if they had imaged an APFS-encrypted drive with a prior version of MacQuisition, like 2017 R1, and they have that encrypted disk image? Will BlackLight 2018 be able to read those older files and prompt them like we’re seeing here?
Joe: If it is an APFS-encrypted drive and you just have a physical disk image, yes. You should still be able to import it into BlackLight.
Ashley: Thank you.
Joe: This screen also gives you immediate feedback on whether the disk can be unlocked: if you put the wrong password in and click 'Unlock', it'll prompt you again. We cannot, and do not, try to import an encrypted volume without the password, because we can get no information from it; even the metadata itself is encrypted.
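[Editor's note: the reason any one user's password, or the recovery key, can unlock a FileVault-encrypted APFS volume is that the volume encryption key (VEK) is stored multiple times, once wrapped per credential. The sketch below is a toy model of that keybag idea: the KDF parameters, the XOR "wrapping", and all keys and passwords are stand-ins, not Apple's actual scheme, which uses AES key wrapping.]

```python
# Toy keybag: one wrapped copy of the volume encryption key per credential.
import hashlib

def derive_kek(secret, salt, iterations=100_000):
    # Key-encryption key derived from a password (PBKDF2 as a stand-in KDF).
    return hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, iterations)

def wrap(vek, kek):
    # Toy wrap via XOR; real APFS uses AES key wrapping.
    return bytes(a ^ b for a, b in zip(vek, kek))

unwrap = wrap  # XOR is its own inverse

SALT = b"per-volume-salt!"
vek = b"0123456789abcdef0123456789abcdef"  # 256-bit toy volume key

keybag = {
    "alice": wrap(vek, derive_kek("alice-password", SALT)),
    "bob": wrap(vek, derive_kek("bob-password", SALT)),
    "recovery": wrap(vek, derive_kek("0000-AAAA-BBBB", SALT)),  # toy recovery key
}

def unlock(user, secret):
    return unwrap(keybag[user], derive_kek(secret, SALT))

# Any single credential recovers the same VEK:
assert unlock("alice", "alice-password") == unlock("bob", "bob-password") == vek
```

This also illustrates why no import is attempted without a credential: without one valid secret, no copy of the VEK can be unwrapped, and everything on disk, metadata included, stays opaque.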
And there's one minor change that we've made to the UI in the details view. It's a small change, but it's worth mentioning. If you notice here, for non-APFS volumes, for instance this EFI partition, which is FAT32, you'll see both the space used and the space available, because a FAT32 volume has its own free space. But APFS volumes don't necessarily have free space associated with them, so it didn't make sense for us to do the same thing.
So, what we do is slightly different: we only show the space used for each volume, and you get a new notion of the total size of the whole container itself. It's a minor change, but worth mentioning.
So, in review: the APFS file system is found on many upgraded iOS and macOS devices, and on all new ones going forward from September 2017. It has some interesting new features, such as snapshots, clones, and copy-on-write, which may offer us more file history for reconstructing activity once we start doing further analysis. APFS containers can have multiple volumes that share free space and blocks, and that grow and shrink. So, items found in unallocated space are going to be harder to attribute to a specific volume, and possibly to a specific user. And MacQuisition and the upcoming BlackLight release will acquire, decrypt, and process APFS volumes; BlackLight, of course, does so on both Windows and macOS.
Are there any further questions?
Ashley: We do have a couple. Have you noticed any processing impact when processing an encrypted APFS container versus one that was decrypted during imaging? I know you had a note on the slide that there's no real difference between the two, but have you seen anything different when processing, as far as speed?
Joe: There is no difference. The only reason to unlock the disk on the MacQuisition side is if you want to do a logical collection or you want to use some of the triage features; you're unlocking it to allow MacQuisition access to the logical data. You're still taking a physical image of the disk, and it's going to behave no differently in BlackLight. And that's because, again, all of this data is encrypted at rest on the disk at the file system level. So simply unlocking the device doesn't change what you image: you're not imaging anything different than you would be if the device remained locked.
This is not the case, for instance, for HFS+ with Core Storage.
Ashley: Great. We do have a couple of questions about the ability to gain access without a password or recovery key, or references to brute forcing. My understanding, Joe, is that we don't have that built into our product, but I don't know if you're aware of any other tools that provide that functionality right now that you can mention.
Joe: I'm currently not aware of any tools that provide that functionality. However, my analysis of the file system suggests that this should be very possible. All the information you need to brute force the key is available without the password itself, so you could do some sort of offline brute forcing. However, it does use industry-standard techniques, key wrapping with a variable number of iterations [42:10], so the brute forcing will be slow. I'm not aware of any tools that currently do this, but it's technically not impossible.
The one caveat to that is the new iMac Pro with the built-in T2 chip, which seems to handle things a bit differently, in that it's actually doing some sort of hardware-level encryption as well. Those keys probably cannot be brute forced outside of the device itself.
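[Editor's note: the offline attack Joe describes as possible but slow can be sketched as follows. The salt, iteration count, and verification check here are illustrative assumptions, not values from the APFS on-disk format; the point is that because those parameters are readable without the password, candidates can be tested off-device, while a high iteration count makes each guess expensive.]

```python
# Sketch of offline password brute forcing against a key derived with an
# iterated KDF. Real APFS wraps the volume key; here a derived verifier
# stands in for "does the wrapped key unwrap cleanly?".
import hashlib

SALT = b"on-disk-salt-123"   # hypothetical values read from the keybag
ITERATIONS = 100_000          # each guess must pay this full cost

def derive(candidate):
    return hashlib.pbkdf2_hmac("sha256", candidate.encode(), SALT, ITERATIONS)

# Verifier derived from the true password (unknown to the attacker).
verifier = derive("correct horse battery staple")

def brute_force(wordlist):
    for word in wordlist:
        if derive(word) == verifier:
            return word
    return None

found = brute_force(["letmein", "password1", "correct horse battery staple"])
```

At realistic iteration counts, each candidate costs a noticeable fraction of a second even on fast hardware, which is why a long passphrase pushes a dictionary or exhaustive search out of practical reach, and why T2-protected keys, which never leave the device, cannot be attacked this way at all.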
Ashley: Alright, I think that's all the time we have for questions right now. Like I mentioned, if we didn't get to your question, we'll make sure to follow up with you after the webinar. I did want to [42:55] if you are interested in the MacQuisition functionality, that is available today. So, MacQuisition 2018 R1 will acquire those volumes, encrypted or in a decrypted state, and create the APFS evidence files for you.
And then, there were many questions about when we are going to get BlackLight 2018 R1 into your [43:19] hands. We are working on that diligently. Like I said, it's in beta right now with a few folks, and we're putting the final polish on it. We expect to have it out in the next week or so. So, follow us on social media if you want to keep track of that, or if you already own BlackLight, look for an email in the next week or two announcing the release of APFS support. That'll be full support: you can bring in encrypted images and it'll prompt you for the password, or you can bring in unencrypted images created by MacQuisition 2018 R1; both of those are fully supported.
And we'll continue to look to Joe and his research as we find new and exciting things on APFS to report to you. We'll continue to provide webinars on this and similar topics. So, if you have a suggestion or request, you can tweet at Joe or me on Twitter. Mine is ashleydfir, [44:18], and Joe is also available on Twitter. I think you're on the last slide, if you want to give your Twitter handle. [crosstalk]
Joe: Oh, it's @jtsylve. But Ashley, if they have any further questions, either about the product or technical questions about APFS, what is the best way for them to ask those after this webinar?
Ashley: You can always ask questions on the product support site on our website. And like I said, you can also contact us directly; you can contact me. We're going to be providing blog posts and additional videos, and this presentation will be made available online. As we collect questions, we'll respond to them and make the answers available publicly, so you'll have the best and most up-to-date information on how to work with APFS.
I think that covers quite a bit. Thank you all for hanging with us for almost an hour. Joe, I really appreciate your time in covering all these topics and our questions. That's going to wrap up our presentation for today; we will be providing a recording at a later date.
So, thank you so much, and have a great day!
End of Transcript