SANS DFIR Summit 2019 – Recap

by Christa Miller, Forensic Focus

Held in Austin, Texas each summer, the SANS Digital Forensics and Incident Response (DFIR) Summit is known for offering in-depth but accessible digital forensic research — and for its laid-back, fun atmosphere.

This year’s summit, which ran from Thursday, July 25 through Friday, July 26, delivered a balanced menu of tool-oriented “how-to” style talks, artifact talks, and some specific incident response insights. Most of the people in the room on Day 1 said they were first-time attendees, though we met up with a number of returning faces, too.

One captivating new feature of this year’s summit: a graphic recorder, Ashton Rodenhiser of Canada-based Mind’s Eye Creative, who took “sketch notes” to visually represent the talks. SANS Senior Instructor and Summit advisory board member Phil Hagen said he engaged Ashton because conference organizers wanted a visual focal point around which people would come together and discuss the talks after the fact — which is exactly what they did!

In addition, David Elcock, executive director of the International Consortium of Minority Cyber Professionals (ICMCP) was on hand to talk about the organization’s work making DFIR and information security more inclusive for women, people of color, people with disabilities, and members of the LGBTQ+ community.


Elcock described the ICMCP’s three core processes: scholarships, networking and mentoring, and career pathing from high school onwards. Part of the organization’s work is to source scholarships, which are underwritten by sponsors such as airlines and banks; and to work with organizations like SANS, which partners with the ICMCP to offer a Diversity Cyber Academy.

Both Rodenhiser’s and Elcock’s involvement reinforced the overall conference theme of community-building, reflected in opening remarks offered by Rob Lee and Phil Hagen. 

Tool Talks

The SANS DFIR Summit’s tool talks aren’t typical vendor feature rundowns. While they do, like commercial tool webinars or lectures, focus on solving a specific problem, they’re grounded in hours of painstaking research and development, and they serve as blueprints other practitioners can use as a foundation for their own research.

At this year’s Summit, automation was the key tool theme — reducing workload by allowing computerized processes to run through vast amounts of data to “find evil.” The talks kicked off with a keynote delivered by Troy Larson, Principal Forensic Investigator at Microsoft, and Eric Zimmerman, Senior Director at Kroll Cybersecurity and a SANS certified instructor and author.

Their talk, “Troying to Make Forensic Processing EZer,” focused on tool development and how to scale it up, using Zimmerman’s Kroll Artifact Parser and Extractor (KAPE) and Larson’s KAPE-reliant EZ triage method as examples.

Zimmerman’s part of the talk drew on his experience developing KAPE, which was released in February as a way to automate key steps of the forensic process. It’s designed as a customizable, extensible toolchain: by Zimmerman’s definition, a set of targets (files to collect) and modules (processes to run), grouped to reduce data and process it into something a human analyst can work with in a thorough, repeatable, scalable, auditable way.
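
For readers who haven’t worked with KAPE, here is a minimal sketch of what a target-plus-module run looks like, wrapped in Python for repeatability. The kape.exe path and the target and module names are assumptions for illustration; check your own KAPE install for the targets and modules actually available.

```python
import subprocess

# Hypothetical example: collect triage artifacts from a live C: drive with a
# KAPE compound target, then run a compound module against the collected data.
# Paths and target/module names are assumptions; adjust to your KAPE install.
kape = r"C:\Tools\KAPE\kape.exe"

cmd = [
    kape,
    "--tsource", "C:",                # target source: the drive to collect from
    "--tdest", r"C:\triage\tout",     # where collected files are written
    "--target", "KapeTriage",         # compound target (a set of file specifications)
    "--msource", r"C:\triage\tout",   # module source: the collected files
    "--mdest", r"C:\triage\mout",     # where module (parser) output goes
    "--module", "!EZParser",          # compound module (a set of programs to run)
]

subprocess.run(cmd, check=True)
```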

Larson then talked about how KAPE is useful for automating at scale in compromise investigations. The triage method he developed captures a complete disk image as a snapshot at a point in time. In turn, the snapshot can be scanned and its results delivered as structured data for large scale threat telemetry, detection and hunting.

Aimed at people who might actually use KAPE in a SOC environment, Larson’s talk included ideas for deploying it in their own environments, along with the “rich story bed” of triage analytics and ways to organize hundreds of security events, including a graphic analysis of everything a file did or an account touched.

Another “tool talk with broader implications” was delivered later in the day by BlackBag Technologies’ Dr. Joe T. Sylve, Director of Research & Development. Offering a guide to the research & development process, Sylve began by highlighting the importance of research that’s informed by practitioners, because when tool vendors and practitioners aren’t aware of research, it’s useless to the broader community.

At the same time, though, misconceptions about research persist: that it’s hard, that you need academic qualifications you lack, or that the skill sets and tools involved are too disparate. In fact, Sylve told the room, research isn’t any harder than conventional digital forensics, academic qualifications are “no big deal,” and there’s plenty of overlap between skill sets and tools.

Using his own research into APFS snapshots as an example, Sylve offered key takeaways including:

  • The best research candidates are those you’re personally invested in, either because of a case, or just because it piques your curiosity.
  • When it comes to applied research, it’s best to limit the scope by setting attainable goals. Even a small amount of new knowledge is useful; there’s no need to “solve forensics.”
  • If a topic doesn’t strike you as interesting, that doesn’t necessarily mean research isn’t for you; it may just mean your skills are better suited to a different research area.
  • Dead ends will happen! They may not answer your question, but they do help you narrow it down. This is especially true of large, complicated tasks that would otherwise make it easy to get lost in the weeds.
  • Expand your search as dead ends occur or as one method or another doesn’t lead to answers; be flexible as you learn new things.
  • Sometimes research raises more questions than answers. Calling this “good job security,” Sylve discussed the importance of identifying things you didn’t cover — and then telling others what’s coming. This accountability helps you to hit research milestones, or lets others build on your research.
  • Sylve stressed that just knowing something isn’t useful; it’s important to share with others, if for no other reason than validation. Even so, publishing a 20-page paper isn’t necessary: you can post to Twitter, a blog (your own or as a guest), DFIR Review, or conference presentations, or even test your findings on a podcast like Forensic Lunch to get reactions before presenting more formally. Each path has different barriers to access and reach, so find the one that works best for you.

Brian Olson, Senior Manager of Technical Management at Verizon Media, described how the open source Ansible platform, though not a “security tool” per se, proved useful during a live incident response because of its adaptability. The response involved the systems of a newly acquired company, which were found to have a number of vulnerabilities.

Designed to automate repetitive IT tasks like configuration management, application deployment, and intra-service orchestration, Ansible’s self-documenting, customizable, scalable features enabled Olson’s team to build a repeatable triage playbook, obtaining volatile data, downloading and labeling files, and uniformly collecting artifacts, among other tasks.

They could then perform stack analysis of processes and webdir files and identify interesting network connections. Over two phases, the team was able to patch the hosts to “stop the bleeding” and remove known malware — and ultimately mature their incident response program.
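
To illustrate the kind of repeatable, self-documenting collection step such a playbook encodes, here is a minimal sketch that drives Ansible ad-hoc commands from Python to gather volatile data from a group of Linux hosts. The inventory path, host group, and commands are illustrative assumptions, not Olson’s actual playbook.

```python
import subprocess

# Hypothetical triage collection via Ansible ad-hoc commands.
# Inventory path, host group, and commands are illustrative assumptions.
INVENTORY = "hosts.ini"
GROUP = "acquired_company"

volatile_commands = {
    "network_connections": "ss -tunap",   # current sockets and owning processes
    "process_list": "ps auxww",           # running processes
    "logged_in_users": "who -a",          # interactive sessions
}

for label, command in volatile_commands.items():
    # Each run is identical across hosts, which keeps the collection uniform
    # and easy to document in an incident timeline.
    subprocess.run(
        ["ansible", GROUP, "-i", INVENTORY, "-m", "command", "-a", command],
        check=False,
    )
```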

Distributed evidence collection and analysis was the subject of a talk given by Nick Klein, Director of Klein & Co. and a SANS Certified Instructor, and Mike Cohen, a developer with Velocidex Innovations. They provided an overview of Velociraptor, which fills a need for deep visibility of endpoints — surgically examining endpoints not just for current activity, but also historical context in digital forensic investigations, threat hunting, and breach response.

Few tools, they said, offer scalable, network-wide deep forensic analysis, and Velociraptor, a single operating system-specific executable, can work on both client and server. Designed so that users don’t need to be experts, Velociraptor has no database, libraries, or external dependencies, and is highly customizable. 

Klein and Cohen moved from the artifacts Velociraptor can collect on a single system to how the same capability helps hunt for those artifacts — for example, event logs, selected NTUSER hives, or specific forensic evidence such as the use of Sysinternals tools or particular registry keys — across a network. From there, whatever you can hunt for, you can proactively monitor for, including DNS (which many organizations don’t log), each USB device plugged into a machine, or even Office macros.

Klein and Cohen stressed that Velociraptor is a work in progress and that they’re seeking feedback from people using it on real world cases. Learn more at www.velocidex.com!

The summit’s final tool talk was delivered by Elyse Rinne, Software Engineer, and Andy Wick, Senior Principal Architect, both a part of Verizon Media’s “Paranoids” information security team. Their talk focused on using the open source full packet capture system, Moloch, to “find badness.”

Moloch’s capabilities complement what you already have. By inserting it between the network and the internet, you can then store the data on machines with sufficiently large disk space to use later for hunting and incident review. In addition, Rinne and Wick talked about packet hunting, or searching for things within packets themselves.

The Paranoids’ future work will include data visualizations, protocols, cloud, and so on. Meanwhile, Moloch has a large and sustained user community, including an active Slack channel and in-person local meetups. In addition, molochON will be held October 1 in Sunnyvale, California. Learn more at Molo.ch.

Forensic Artifacts

The summit’s artifacts talks ranged widely across platforms: Windows, Mac, iOS, Android, and even email. These talks followed the theme of answering questions that may arise during casework.

Windows Artifacts

The first artifacts discussion went in depth on AmCache investigation. Blanche Lagny, a Digital Forensic Investigator with France’s Agence Nationale de la Sécurité des Systèmes d’Information (ANSSI, the National Agency for Information Systems Security), described how the AmCache, a Windows feature since Windows 7, stores metadata about programs present on a system.

Although tools like AmCacheParser and RegRipper parse the AmCache, there’s a lack of documentation, and interpretation isn’t as easy as it might appear. Lagny’s published technical report offers this kind of reference.

Covering three different scenarios, Lagny described how AmCache behaves differently across Windows 8, Windows 10 Redstone 1, and Windows 10 Redstone 3, and how because the artifacts keep changing, it’s imperative to look at every file so you don’t miss important information. Overall, AmCache can be considered a “precious asset” in investigations because it stores data about executed binaries, drivers, executables, Office versions, operating system, etc. 
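
As a hedged illustration of the kind of data Lagny describes, the snippet below uses the third-party python-registry package to list entries under the InventoryApplicationFile key found in newer Windows 10 Amcache hives. Key and value names vary by Windows build, which is exactly Lagny’s point, so treat them as assumptions to verify against your own evidence.

```python
from Registry import Registry  # third-party "python-registry" package

# Hypothetical Amcache.hve walk-through; the key and value names below are those
# commonly reported for recent Windows 10 builds and may differ per build.
reg = Registry.Registry(r"C:\evidence\Amcache.hve")

apps = reg.open("Root\\InventoryApplicationFile")
for entry in apps.subkeys():
    values = {v.name(): v.value() for v in entry.values()}
    print(
        values.get("LowerCaseLongPath"),   # full path of the executable
        values.get("FileId"),              # SHA-1 of the file, prefixed with "0000"
        values.get("LinkDate"),            # PE compilation (link) timestamp
    )
```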

Windows 10 compressed memory was the subject of a presentation by two FireEye Labs Advanced Reverse Engineering (FLARE) reverse engineers, Omar Sardar and Blaine Stancill. They described how modern operating systems compress memory to fit as much into RAM as possible, and why: the system can use multiple cores and perform simultaneous work, and the capability allows flexible kernel deployment.

However, the memory forensics tools Volatility and Rekall couldn’t read compressed pages. Sardar and Stancill described how they integrated their research into both tools, creating a new layer in Volatility and a new address space in Rekall. They rounded out their presentation with an example of how their plugin found critical pieces of information about a piece of malware (an orphan file with its own DLL): a new mutex, handles, shellcode, MZ payload signatures, and payload strings.

Sardar and Stancill hosted the Flare-on.com challenge at Black Hat, and have made their research available on GitHub: look for Win10_volatility, win10_rekall, flare-vm, and commando-vm. In addition, Andrea Fortuna has published a video with added description.

Apple Artifacts

Bridging the gap between Windows and Mac was a presentation by Nicole Ibrahim, Senior Associate of Cyber Response at KPMG, who focused on macOS .DS_Store files: as she put it, “like shellbags, but for Macs.”

Ibrahim focused on how the existence of this artifact indicates that a given folder was accessed, as well as how access happened, because it requires Finder GUI interaction. The Finder uses .DS_Stores to restore a folder view, of course, but its relevance to an investigation is its indication of how a user interacted with a folder — created, expanded, opened in a new tab, etc.

Ibrahim also went through interesting correlations and caveats including:

  • Window bounds (the point-to-point coordinates of each window corner), stored in the Bwsp (browser window settings) record when the user moves the Finder window around; treat it like a semi-hash to correlate different folder accesses.
  • Scroll positions, vertical and horizontal (X and Y axes), which let Finder know where you were last viewing, i.e. which section of a folder the user was looking at.
  • Trash put-backs: when a file is sent to the trash, its original name and location are recorded so Finder knows where to put the restored file; even if the file is moved, the original records follow it.
  • A lack of full paths or stored timestamps; at best the records provide only a time range.
  • Volatile record data, meaning you may have to carve for .DS_Store files, check local snapshots and Time Machine backups, then correlate with other .DS_Store files on disk.

Ibrahim’s DSStoreParser is available at her GitHub repository.
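
For readers who want to poke at these records themselves, a minimal sketch using the third-party ds_store Python package follows (DSStoreParser itself is the better-tested option for casework). The record codes shown, such as bwsp for browser window settings and ptbL/ptbN for trash put-backs, are assumptions drawn from public documentation rather than an exhaustive list.

```python
from ds_store import DSStore  # third-party "ds_store" package

# Hypothetical look inside a single .DS_Store file; the record codes of interest
# are assumptions to verify against current documentation and test data.
INTERESTING = {"bwsp", "lsvp", "ptbL", "ptbN", "vSrn"}

with DSStore.open("/evidence/extracted/.DS_Store", "r") as store:
    for record in store:
        # Codes may come back as bytes depending on the library version.
        code = record.code.decode() if isinstance(record.code, bytes) else record.code
        if code in INTERESTING:
            # filename = the item the record describes, code = the record type,
            # value = the raw stored payload for that record.
            print(record.filename, code, record.value)
```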

The other Apple artifacts talk, delivered by BlackBag Technologies’ Senior Digital Forensics Researcher Dr. Vico Marziale, shed light on the macOS Spotlight desktop search service. Spotlight, the desktop search on OS X, macOS, and iOS, indexes file content and metadata. It’s turned on by default, but is largely undocumented by Apple. Marziale’s talk focused on the metadata store, which he said is in some ways reminiscent of the Windows registry.

Why it matters: the data in Spotlight, including message contents, email contents, phone numbers, print activity, location, calendar items, and more, can help you pinpoint specific users performing specific activities, and its timestamps can be used to reconstruct a timeline. Marziale then described what’s known about Spotlight’s complicated internal structure, including the volume level and the user level.

To get data from Spotlight, you can use CLIs including mdutil, mdimport, mdfind, and mdls. You can also use Spotlight_parser and the brand-new beta Illuminate, a free CLI research tool for parsing Spotlight items.  
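
On a live or mounted macOS system, those built-in CLIs can be scripted. Below is a minimal sketch that calls mdfind and mdls from Python; the query string and the attributes returned are illustrative assumptions, not output from Marziale’s research.

```python
import subprocess

# Hypothetical use of the macOS Spotlight CLIs from Python.
# The mdfind query below is an illustrative assumption.
QUERY = 'kMDItemKind == "PDF"'

# mdfind asks the Spotlight index for matching files without walking the disk.
result = subprocess.run(["mdfind", QUERY], capture_output=True, text=True, check=True)

for path in result.stdout.splitlines()[:10]:
    # mdls dumps the indexed metadata attributes for a single file.
    meta = subprocess.run(["mdls", path], capture_output=True, text=True)
    print(path)
    print(meta.stdout)
```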

Email Artifacts

Arman Gungor, CEO of Metaspike, spoke about the forensic investigation of emails altered on the server. Sharing a real-life scenario, Gungor described how easy it can be to assume that emails are unalterable on a server when in reality, web services and APIs make it easy to modify both message content and headers.

Gungor made the case for collecting metadata not just from messages, but also from the servers on which they’re stored. Server metadata isn’t typically acquired alongside messages, but it can contain important clues as to whether a message was altered. When emails are stored in Gmail, it’s also wise to validate message metadata using DomainKeys Identified Mail (DKIM) signatures.

The same goes for a message’s neighbors, and if possible the entire folder, which provide important context. For example, manipulated messages might show larger gaps between message unique identifiers (UIDs), or internal dates that deviate from the expected chronology.
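
A minimal sketch of the kind of check this implies, using Python’s standard imaplib to pull UIDs and server-side INTERNALDATE values for a folder and flag out-of-order dates; the server, credentials, and folder name are placeholders, and real collections should of course follow a forensically sound workflow.

```python
import imaplib
import re

# Hypothetical check for chronology anomalies between assigned UIDs and
# server-side INTERNALDATE values. Host, credentials, and folder are placeholders.
M = imaplib.IMAP4_SSL("imap.example.com")
M.login("user@example.com", "app-password")
M.select("INBOX", readonly=True)

typ, data = M.uid("FETCH", "1:*", "(INTERNALDATE)")

previous = None
for item in data:
    line = item.decode() if isinstance(item, bytes) else str(item)
    uid_match = re.search(r"UID (\d+)", line)
    if not uid_match:
        continue
    uid = int(uid_match.group(1))
    received = imaplib.Internaldate2tuple(line.encode())  # parses the INTERNALDATE field
    if received is None:
        continue
    # UIDs are assigned in arrival order, so a later UID with an earlier
    # INTERNALDATE (or a large UID gap) is worth a closer look.
    if previous and received < previous:
        print(f"UID {uid}: INTERNALDATE earlier than the preceding message")
    previous = received

M.logout()
```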

Mobile Artifacts

Vehicle forensics has been a topic of DFIR discussion for some time, but largely in relation to built-in systems like vehicle event data recorders (EDRs, or “black boxes”) and separate mobile device forensics helping to determine whether someone was driving while distracted.

However, both iOS CarPlay and Android Auto bring the two together in unprecedented ways, integrating messaging, navigation, contacts, calendars, and other data to augment travel. This integration was the topic of a talk by SANS instructors and authors Sarah Edwards, Forensic Specialist at Parsons, and Heather Mahalik, Senior Director of Digital Intelligence at Cellebrite.

“They See Us Rollin’; They Hatin’” described the convergence — and potential correlations — between mobile and vehicle platforms via research on a jailbroken iPhone X and a rooted Samsung.

  • On iOS, GUIDs can be used to correlate which car is doing what for each device, while correlating device connections across those databases requires physical access to the device.
  • Messages on an Apple device can be dictated through Siri and, as a result, correlated through the KnowledgeC database; other iOS locations include InteractionsC.db and sms.db. For Android, Google Voice, MMSSMS.db, or other evidence from third-party apps and/or logs.db can help. (A query sketch follows this list.)
  • Bluetooth connections or Android Auto Voice directions don’t mean the driver was hands-free; conversely, it can be difficult to prove driver distraction, especially when a passenger could have sent text messages. 
  • Furthermore, a device doesn’t have to be plugged in to show it was in motion; app usage will reflect its own motion, whether it’s in a car, on a bike, or on another form of transit.
  • Devices can be connected to more than one vehicle, so it’s important to correlate events across vehicles.
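
As the query sketch referenced above, here is a hedged look at KnowledgeC data using Python’s sqlite3 against an exported copy of knowledgeC.db. The ZOBJECT table and column names, the /carplay/ stream prefix, and the Mac-epoch conversion reflect publicly documented research and are assumptions to verify against the iOS version at hand.

```python
import sqlite3

# Hypothetical query against an exported copy of knowledgeC.db.
# Table, column, and stream names are assumptions based on public research
# and can change between iOS versions.
conn = sqlite3.connect("/evidence/knowledgeC.db")

rows = conn.execute(
    """
    SELECT ZSTREAMNAME,
           datetime(ZSTARTDATE + 978307200, 'unixepoch') AS start_utc,
           datetime(ZENDDATE   + 978307200, 'unixepoch') AS end_utc,
           ZVALUESTRING
    FROM ZOBJECT
    WHERE ZSTREAMNAME LIKE '/carplay/%'
    ORDER BY ZSTARTDATE
    """
)

for stream, start, end, value in rows:
    # Windows of CarPlay connection can then be lined up against messaging
    # and navigation artifacts from the same period.
    print(stream, start, end, value)

conn.close()
```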

Forthcoming research includes the overall Android Auto timeline.

Another mobile forensics presentation, “Tracking Traces of Deleted Applications,” discussed how what’s not on a device can sometimes tell you more about device usage than what is there. Christopher Vance, Curriculum Development Manager at Magnet Forensics, and Alexis Brignoni, a forensic examiner and researcher, talked about how to get data from deleted apps that can help investigators and examiners triage which devices to focus on.

Mirroring the observation Mahalik made in her presentation regarding Android data “sprinkled everywhere,” Vance and Brignoni talked about the different files and databases that could remain in different places — including in the cloud — following apparent app deletion.

Purchase histories, uninstalled app lists, data and network usage records, and other locations can all contain pieces of the puzzle; because each holds a specific piece of data, correlating them all can be important. Some retain data for longer periods than others, and the data retained for each deleted app varies. (This is where research methodology is critical: how and where apps store their data, and what data remains available, can vary widely from app to app, so being able to test is essential.)

So, while your chances of recovering data are better early on, having multiple places to look improves your chances even after some time has passed. These traces can be especially important when it comes to building timelines around app purchases, potential installs, usage and connection times, and deletion times. Tools like Brignoni’s Python parser, available at his GitHub repository, can help.
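
A minimal, tool-agnostic sketch of that triage idea: sweep a file system extraction for leftover files and directories that mention a package name of interest. The extraction root and package name are placeholders, and real cases would add the app-specific databases Vance and Brignoni describe.

```python
from pathlib import Path

# Hypothetical sweep of a mounted/extracted file system image for traces of an
# app that is no longer installed. Root path and package name are placeholders.
EXTRACTION_ROOT = Path("/evidence/android_fs")
PACKAGE = "com.example.deletedapp"

hits = []
for path in EXTRACTION_ROOT.rglob("*"):
    if PACKAGE in path.name:
        try:
            stat = path.stat()
        except OSError:
            continue
        # Leftover caches, databases, and logs can survive app deletion and
        # help bound install, usage, and deletion times.
        hits.append((path, stat.st_mtime))

for path, mtime in sorted(hits, key=lambda item: item[1]):
    print(mtime, path)
```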

Perspectives on Incident Response

One interesting talk that didn’t fit into the other categories was given by Terry Freestone, a Senior Cybersecurity Specialist with Gibson Energy. Freestone spoke about industrial control system (ICS) incident response. Following on from our article, “The Opportunity in the Crisis: ICS Malware Digital Forensics and Incident Response,” Freestone’s insights provided a great primer to anyone interested in pursuing DFIR and infosec careers in ICS.

Freestone began by bridging from other sectors into ICS based on their similarities, including a limited number of breach scenarios, the need to adhere to standards, and the notion that time is money. However, there are some critical differences, largely owing to the fact that ICS’ uptime and safety systems affect the real world — so that when things go wrong, they can REALLY go wrong.

Basic physical safety is a responder’s first priority when arriving at an ICS facility, and it informs everything you might do. Freestone emphasized that careful communication with facility workers, including following their guidance on personal protective equipment and on how and where to move through a facility, is imperative.

In fact, it can help lay the foundation for effective incident response. Interviewing facility personnel to get their version of an incident can be crucial, as they know what their facility’s “normal” is, and their observations could in fact be tied to the digital aspects of an investigation.

The closing presentation on Day 1 was a team-based incident response war game designed and moderated by Matt Linton, Chaos Specialist; Adam Nichols, Security Engineer; and Francis Perron, Program Manager of Incident Response, all of Google; along with Heather Smith, Senior Digital Forensics and Incident Response at CrowdStrike.

In contrast to last year’s exercise, this year’s incident took place almost entirely in the cloud. Smith told us it was designed this way because there isn’t currently a lot of training available for cloud analysis in the IR world — but responders have to get used to seeing it. Playing out a more difficult scenario, she said, can prepare examiners for where the industry is going and what they can proactively do.

During debriefings, players concluded that cloud is a different beast from what they’re used to. They found they had to prepare differently, with different expertise and tools — for example, seeking persistence mechanisms rather than “going through the front door.”

Each team of about 10 people had multiple 15-minute sessions to investigate different aspects of a cloud-oriented attack, with several rounds of questions to answer. Ground rules included using Google’s modified version of the fire/rescue community’s Incident Command System. 

Teams were further constrained by their own organizations’ maturity, based on a capability matrix including tools like binary whitelisting, antivirus, extension blacklisting, host IDS/IPS, etc.

The upshot: preplanning for what to do with this much complexity, including what happens when there could be GDPR considerations, is imperative. Preparing for forensics in the cloud means potentially having API access and tools access, as well as the appropriate playbooks for whatever cloud environment your organization is using.

Special Events

Attendees had the chance to submit requests for Eric Zimmerman to write a brand-new tool by summit’s end. The MFT Explorer was the result of that tool challenge, with Zimmerman releasing the tool for community review on Day 2.  

Also on Day 2, Mari DeGrazia, Senior Director of Incident Response at Kroll, moderated live debates, splitting nine SANS instructors into three teams to take on hot topics in digital forensics, incident response, and even a little bit of pop culture. Network vs. host based evidence, Windows vs. Mac analysis, triage data vs. full disk images, multifactor authentication, whether Eric Zimmerman’s tools work, and the pronunciation of “GIF” were all hotly debated!

Following the debates, the annual Forensic 4:cast Awards were announced. If you aren’t on Twitter and/or you’ve been living under a rock, you can find the results here.

Have you encountered any of these tools or artifacts in your investigations, conducted your own research, or want to discuss any of these topics? Be sure to subscribe to our RSS feed to get links to our daily insights, sign up to receive our monthly newsletter, and join in with discussions on the forums.
