The SANS DFIR Summit, the largest ever SANS Institute event thanks to a record 20,000 registrants, took place July 16-17 this year. In a fully virtual format owing to COVID-19 precautions, the Summit, said SANS Institute Fellow Rob Lee in his opening remarks, was able to “engage the community in ways we’ve never been able to do before.”
That included many first-time attendees from the digital forensics, pen testing, and hacking communities, who could finally engage — especially via the Digital Forensics Discord server — in ways previously constrained by time, budgets, and distance.
Forensic Focus couldn’t attend all the talks, as the twin-track format meant most ran concurrently, but conference registrants can find recordings behind the Summit’s signup wall. Here, we highlight the sessions we were able to attend. Look for additional “focus” articles coming soon on timelines, training the next generation of DFIR professionals, and other topics!
#DFIRforGood
The first keynote of the conference, “A DFIRent side of DFIR: Forensicating for Black Lives and Other Social Justice Issues,” was delivered by Matt Mitchell, who challenged attendees to work in a different way, moving the needle on the bigger picture or greater cause. Instead of simply going from job to job finding “bad guys who target nations or children,” or even doing local volunteer work, he encouraged next-level strategizing as “public interest technologists.”
Mitchell, a hacker, security researcher, and Tech Fellow to the BUILD Program at the Ford Foundation, spoke about the need to recognize the profession’s limits and to apply DFIR skills over a few hours, days, or weeks to a cause you’re passionate about, to “assist building a new kind of forensics” through organizations such as:
- The Center for Long Term Cyber Security at the University of California-Berkeley and its Citizen Clinic, helping students to connect digital and physical crimes
- CitizenLab
- The Open Technology Fund
- MOSS, the Mozilla Open Source Support program
- NGO ISAC
Later during the summit, Lee Whitfield, a SANS senior technical adviser, presented “Just Forensics, Mercifully,” echoing Mitchell’s presentation by describing how the movie “Just Mercy” inspired a deep desire to question: “What happens when we see injustice that intersects with our own chosen field?”
Acknowledging that such feelings are often pushed aside as life gets in the way, Whitfield referred to cases where the right forensic skills could have uncovered exonerating digital evidence sooner, had defendants had the resources and their attorneys the knowledge. “It isn’t enough to feel,” he said. “We have to act.”
Contrasting this with the opportunity and abundance that many in digital forensics are afforded, particularly during the COVID-19 pandemic, when practitioners have seen a massive increase in work rather than a slowdown, Whitfield encouraged his audience not to sit back while other people are in need, but to work to reduce or eliminate inequitable situations. Find a list of ways to help in his blog post, “DFIR for Good.”
Improve DFIR diversity, improve DFIR thinking
Balancing inequities is important not just outside of the DFIR community, but also within. Eoghan Casey, author of Digital Evidence and Computer Crime, and Daryl Pfeif, Founder & CEO of New Orleans-based Digital Forensics Solutions, spoke about “Strengthening Trust in DFIR” via a more dynamic, diverse workforce.
The benefits of such a workforce are manifold:
- They can blend lived experiences with scientific principles and methods for a better perspective on how to correct inequities and improve ethics and justice.
- These perspectives can meet the challenges of both ongoing problems, like child abuse, and more recent problems, like ransomware, by addressing evidence that could be missed, or questions that go unanswered, owing to unconscious bias.
- Greater inclusivity can fill the workforce gap, preventing burnout among existing practitioners.
Casey said the DFIR community needs more education and a stronger push toward inclusivity. Online communities, shared resources, and personal mentoring, as well as classes and camps like Pfeif’s Cyber Sleuth Science Lab, can help develop “21st century skills” beyond technology and programming: collaboration, communication, and critical reasoning.
By strengthening these skills from the start, Casey said, digital forensics will mature, building trust in the science by producing better results. For instance, a structured case assessment and interpretation model can help practitioners test their own hypotheses and the strengths of evidence, and protect against both bias and the suggestion of its influence.
Following Casey and Pfeif was SANS instructor Lodrina Cherne, who delivered the third and final keynote, “Learning at Scale.” Questioning how practitioners can solve problems they’ve never seen before — or even ones we don’t (yet) know need to be solved — Cherne advocated for a broad approach including work and disciplines from outside the tech world that can be applied to digital forensics.
This kind of diverse thinking, she said, comes through collaboration, pattern matching, and self-immersion in data, “because you don’t know what you don’t know”: by living and breathing the problem in front of you, it becomes possible to find solutions. Cherne offered a framework to this end:
- Define each feature of a problem, plus its constraints.
- Give things meaningful names.
- Leverage symmetry: have you seen a similar problem before? Cherne used examples from warfare, firefighting, and medical fields to show how to step back and get a wider view in order to zoom back in.
- Try describing one object in different ways, including through the use of more inclusive language.
- Draw a picture or map things out, whether it’s a technical or a career question or problem. Cherne recommended mind-mapping via CyberSeek.org.
- Simplify, or ask a simpler version of, the problem: before you can solve at scale, Cherne noted, you have to be able to solve a single case.
- Think about or frame your own problems in terms of other people’s experience in different disciplines.
- Read a lot, and think about problems and problem-solving, including pattern recognition. Cherne gave an example of how a previous job repairing bicycles helped her connect one kind of problem to another.
- Gut-check your solution through scientific thinking and principles, echoing Casey and Pfeif’s session.
Incident response & malware analysis
Michael Gough, Principal of Incident Response at NCC Group, delivered the first technical talk of the summit: “You need a PROcess to check your running processes and modules. The bad guys, and red teams are coming after them!”
Gough’s talk focused on “memware,” which often doesn’t reside on disk while the system is running. Known in some quarters as “fileless malware” (inaccurately, Gough pointed out, as the code has to “live” somewhere on the system), these processes can be detected, and thus investigated and even hunted like any other forensic artifact.
To that end, the “PROcess” Gough presented involves a regular, built-in routine of hourly, daily, weekly, monthly, and yearly threat hunting through log management and collection of the process command line. This method corrects for time-consuming traditional forensic examinations by relying on sampling, moving toward scalable, less intrusive on-the-fly live system analysis.
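Gough’s routine boils down to sampling process metadata on a schedule and diffing it against what the environment normally runs. A minimal sketch of that diffing step is below; the process names and command lines are hypothetical placeholders, and real collection would come from log management rather than hard-coded lists.

```python
# Sketch of the baseline-diff step in a recurring process hunt.
# A scheduled job would collect (process name, command line) pairs
# from each host, then flag anything the baseline has never seen.

def hunt_new_processes(baseline, snapshot):
    """Return processes in the current snapshot that are absent from
    the baseline: candidates for closer inspection."""
    known = {(p["name"], p["cmdline"]) for p in baseline}
    return [p for p in snapshot if (p["name"], p["cmdline"]) not in known]

# Hypothetical sample data standing in for real collection
baseline = [
    {"name": "svchost.exe", "cmdline": "svchost.exe -k netsvcs"},
    {"name": "explorer.exe", "cmdline": "explorer.exe"},
]
snapshot = baseline + [
    {"name": "svchost.exe", "cmdline": "svchost.exe -k evil"},  # unusual args
]

suspects = hunt_new_processes(baseline, snapshot)
for p in suspects:
    print("investigate:", p["name"], "|", p["cmdline"])
```

Collecting the command line matters here: a familiar binary name with unfamiliar arguments is exactly the kind of anomaly this comparison surfaces.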
Jess Garcia, Lead DFIR Analyst/CEO at One eSecurity, then presented “Data Science for DFIR – The Force Awakens.” Garcia’s fast-paced content covered different levels associated with using data science and machine learning (DS/ML) to a greater extent in incident response:
- “Padawan” or using Pandas to generate file system timelines, dataframes, and series
- “Jedi” or using a tool like Volatility or Kansa for remote triage artifact analysis, together with a tool like Plaso for supertimeline and artifacts analysis
- “Jedi Master” or using a tool like Keras to design, create and train neural networks to visualize intrusions
For example, an autoencoder that’s good at detecting anomalies could be trained on a large amount of normal data, so that inputs it reconstructs poorly stand out and can be reported as anomalies. DS/ML could also be used to sort data not just to find suspicious executables, but also to scale across systems on a network; to find and disable beaconing to a C2 server; or to run predictions and analyze errors or loss to detect intrusions.
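Garcia’s “Jedi Master” level uses Keras, but the underlying idea can be sketched without a neural network: learn what “normal” looks like from many benign samples, then score new inputs by how far they deviate. The toy stand-in below uses per-feature means and standard deviations in place of a trained autoencoder; the feature values are entirely hypothetical.

```python
from statistics import mean, stdev

def fit_normal(samples):
    """Learn a per-feature (mean, std) model from benign samples."""
    cols = list(zip(*samples))
    return [(mean(c), stdev(c)) for c in cols]

def anomaly_score(model, x):
    """Reconstruction-style error: how far each feature sits from the
    learned 'normal', summed as z-scores."""
    return sum(abs(v - m) / s for v, (m, s) in zip(x, model))

# Hypothetical per-connection features: [bytes_out, duration_s, dst_port_entropy]
normal = [[1200, 3.1, 0.20], [1100, 2.9, 0.30], [1300, 3.4, 0.25],
          [1250, 3.0, 0.22], [1150, 3.2, 0.28]]
model = fit_normal(normal)

beacon = [50, 0.1, 0.9]   # small, short, high-entropy: C2 beacon-like
score = anomaly_score(model, beacon)
print("anomalous" if score > 10 else "normal")  # prints "anomalous"
```

A real autoencoder plays the same role with far more expressive power: it compresses and reconstructs its input, and samples with high reconstruction loss are the ones worth a closer look.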
Learn more about these and other DS/ML concepts, as well as a DS4n6 python library with cheat sheets, blogs, and tools Garcia released at the Summit, at ds4n6.io.
Rethinking approaches to digital forensic analysis was the topic of “If at first you don’t succeed, try something else,” a talk delivered by Jim Clausing, a SANS instructor and principal member of AT&T Technical Staff. He showed how, while attempting to extract the unpacked code from a malware sample, he needed to shift gears and reexamine the question he was asking.
The process was part of what Clausing described as a skills refresher during some downtime — an important use of time to update tool sets and maintain proficiency and good investigative habits. Without an immediate engagement to work on, he and his team selected an interesting-looking malware sample to examine independently.
Part of his research identified defenses the malware used, including code placed specifically to make an analyst’s job more difficult. His first and second “plans of attack” didn’t unpack the code, so Clausing turned to CyberChef and a Python script to decrypt encoded strings in a suspicious section called .pe. From there, he was able to extract and unpack hidden executables, including a keylogger.
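Clausing’s actual decryption routine wasn’t detailed in the talk summary, but string-decoding scripts of this kind often follow the same shape: guess or brute-force the scheme’s key, and keep whatever decodes to printable text. The sketch below assumes a single-byte XOR scheme purely for illustration; the encoded string is hypothetical, not from Clausing’s sample.

```python
def xor_decode(blob: bytes, key: int) -> bytes:
    """Decode a byte string obfuscated with single-byte XOR."""
    return bytes(b ^ key for b in blob)

def brute_force_strings(blob: bytes):
    """Try every key; yield results that decode to printable ASCII,
    the usual tell that a candidate key is correct."""
    for key in range(1, 256):
        out = xor_decode(blob, key)
        if all(32 <= b < 127 for b in out):
            yield key, out.decode("ascii")

# Hypothetical encoded string as it might appear in a packed section
blob = xor_decode(b"keylogger.dll", 0x5A)

for key, text in brute_force_strings(blob):
    print(hex(key), text)
```

Brute forcing can surface a few false positives that also happen to be printable, which is why analysts usually eyeball the candidates rather than trust the first hit.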
David Cowen, Managing Director at KPMG and a SANS instructor, and Matthew Seyer, Manager at KPMG, spoke about “Understanding action and artifacts in real-time” or in other words, how actions generate forensic data so that it becomes possible to associate the two.
It can be tedious, said Seyer, to parse artifacts before and after collection and then to run differentials on the data to understand what changed after performing some action. Live monitoring adds to this, showing what files are being touched, registry keys being added, etc. However, seeing changes to complicated binary structures or internals within files or registry values isn’t possible with live monitoring via tools like Process Monitor; something more efficient is needed.
Seyer described how starting with real-time monitoring — listening to event logs and traces, the USN Journal, and the Master File Table (MFT) — makes it possible to match records in real time. From there, by filtering out the noise, an analyst can find patterns. Plugging the data into existing APIs within Windows, Mac, and Linux operating systems is part of this equation. System APIs aren’t practitioner friendly, said Seyer, but tools and libraries exist to facilitate their use.
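The matching Seyer described can be pictured as a join between USN Journal change records and MFT entries on the file reference number, the key both NTFS structures share. The sketch below is a drastically simplified toy version; the record layouts, reference numbers, and paths are hypothetical, and real parsing would read the binary structures the talk covered.

```python
# Toy join of USN Journal change records to MFT entries on the
# file reference number (FRN), so each change gets a full path.

mft = {
    0x2A: {"name": "evil.dll", "parent": r"C:\Windows\Temp"},
    0x2B: {"name": "report.docx", "parent": r"C:\Users\alice\Documents"},
}

usn = [
    {"frn": 0x2A, "reason": "FILE_CREATE"},
    {"frn": 0x2A, "reason": "DATA_EXTEND"},
    {"frn": 0x2B, "reason": "DATA_OVERWRITE"},
]

def correlate(usn_records, mft_entries):
    """Attach a full path to each change record as it arrives."""
    for rec in usn_records:
        entry = mft_entries.get(rec["frn"])
        if entry:
            yield entry["parent"] + "\\" + entry["name"], rec["reason"]

for path, reason in correlate(usn, mft):
    print(reason, path)
```

Doing this join as records stream in, rather than after collection, is what turns the tedious before-and-after differential Seyer described into real-time monitoring.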
Liz Waddell brought her perspective as Incident Commander at Talos Incident Response to her talk, “Help! We need an adult! Engaging an external IR team.” Covering what’s needed for additional surge support, Waddell described how to prepare for this contingency before an incident becomes a crisis, especially if litigation is possible.
Waddell started with spelling out external engagements in the organization’s incident response plan and what goes into choosing an IR team, then moved to the scoping that needs to take place once the team is on scene, including the objective: a root cause analysis, recovery and remediation, or some other outcome. She cautioned that moving too quickly risks mistakes and missing what happened.
Prepping for both remote and onsite forensics can involve mapping networks and any logs or other data the external team may ask for. The team’s actual deployment — including the tools they use and the steps they take from start to finish — and establishing command centers were also covered.
In a prelude to his DFRWS-US workshop on the topic the following week, Ali Hadi, Assistant Professor and Cybersecurity Researcher at Champlain College, presented “Long Live Linux Forensics” together with senior digital forensics students Brendan Brown and Victor Griswold.
The presenters used three case studies and scenarios:
- Linux user artifacts. For each operating system version, the presenters described what artifacts to expect around thumbnails, trash folders, and the “recently used” folder, comparing results between their GNOME and XFCE test environments and checking their findings against Freedesktop (XDG) standards.
- A compromised Apache web server. In contrast to previous cases, where the team showed how to track down a threat actor using logs and other system artifacts, for this session they showed how to investigate unusual network activity via the Apache Backdoor Module.
- Anonymous processes. Spawned from memory chunks and never found on the filesystem — so they leave no artifacts — these nonetheless behave in memory in such a way as to allow the use of tools to follow the chain of execution. The team described how they had found the artifacts in procfs to be most relevant.
Milind Bhargava, founder of Mjolnir Security, spoke about “Hunting bad guys that use TOR in real-time”: how viewing the communications entering and leaving the Tor network not only gave insights into bad actors’ techniques and malware deployments, but also showed how even Tor’s “untraceable” communications could be followed to learn whether any data was exfiltrated.
Using an attack against a client as a foundation, Bhargava described how his lab setup addressed the questions and challenges associated with the Tor-based attack, and how he created a custom honeypot script to capture and log all HTTP-based attacks from Tor exit nodes. Filtering the live feed of attacks into subcategories, he discovered that SQL injections were the most popular form of attack.
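Bhargava’s honeypot script itself wasn’t published, but the filtering step he described, bucketing logged HTTP requests into attack subcategories, could look roughly like the following. The signatures and logged requests here are illustrative inventions, not his data.

```python
import re
from collections import Counter

# Crude signatures for bucketing logged HTTP requests by attack type
SIGNATURES = {
    "sqli": re.compile(r"union\s+select|' or 1=1|sleep\(", re.I),
    "xss": re.compile(r"<script|onerror=", re.I),
    "traversal": re.compile(r"\.\./|etc/passwd", re.I),
}

def classify(request: str) -> str:
    """Return the first attack category whose signature matches."""
    for label, pattern in SIGNATURES.items():
        if pattern.search(request):
            return label
    return "other"

# Hypothetical requests as a honeypot might log them from Tor exit nodes
log = [
    "GET /item?id=1 UNION SELECT user,pass FROM users",
    "GET /search?q=<script>alert(1)</script>",
    "GET /view?f=../../etc/passwd",
    "GET /item?id=2' OR 1=1--",
    "GET /index.html",
]

counts = Counter(classify(r) for r in log)
print(counts.most_common(1))  # SQL injection comes out on top
```

Tallied over a live feed instead of a five-line sample, this is the kind of breakdown that let Bhargava identify SQL injection as the dominant attack form.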
His research, he said, showed the importance of starting by being proactive and investigating a “trail of breadcrumbs” in real time, using predictive analytics to understand where a threat actor is in the attack process in order to mitigate the attack.
Mobile forensic analysis
Apple artifacts were the main topic of the summit’s mobile forensics talks. First, Jared Barnhart, mobile forensic engineer and principal at Parsons Corporation, presented “Lucky (iOS) #13: Time to Press Your Bets.”
Describing how the checkm8 vulnerability/exploit changed the culture of research by making “jailbreak” access possible to hundreds of millions of iDevices, Barnhart focused on native files and directories, in particular four artifacts and their metadata: photos specific to the facial recognition feature; recoverable deleted images; the personalization portrait; and 3bars, which offers rough wifi location tracking.
Staying in the Apple ecosystem, Mattia Epifani, digital forensics analyst at REALITY NET System Solutions, delivered his DFRWS-EU talk, “Forensic analysis of the Apple HomePod and the Apple HomeKit environment,” for a (mostly) American audience.
Epifani described techniques that a forensic examiner could use to extract and analyze room information, music playback and timeline, wifi logs, power logs, the syslog, and other data from an Apple HomePod and a paired iPhone or iPad using the Home App. He also offered an overview of the Apple HomeKit system including the HomePod, Apple TV, or iPad which can all act as a Home Hub used to control and automate HomeKit accessories remotely.
The DFIR Summit concluded with a bit of fun: the DFIRlympics hosted by Mari DeGrazia and Brian Moran, and the annual Forensic 4Cast Awards hosted as always by Lee Whitfield.
Also on Friday, the Summit’s Solutions Track featured presentations from the following vendors:
- Devo, Inc.’s Jason Mical described how to put big data to work in streamlined DFIR analyst workflows including processing multiple memory dumps, correlating investigation evidence into a threat hunt, and building dashboards to visualize live forensic artifacts.
- Palo Alto Networks’ Menachem Perlman talked about avoiding common threat hunting mistakes as investigators seek out stealthy adversaries at scale.
- DomainTools’ Taylor Wilkes-Pierce spoke about profiling threat actors via their choices in hosting and registering domains, using OSINT DNS and infrastructure data to do it.
- ExtraHop’s John Smith described adding a “third pillar” of network detection and response (NDR) to protect against evasion techniques used against existing SIEM and EDR solutions.
- Blue Hexagon’s Balaji Prasad and Arun Raman, together with Microsoft’s Heike Ritter, spoke about applying real-time AI to variable threats, including 0-hour and 0-day threats, via next-gen network and endpoint detection and response to accelerate threat hunting, triage, and IR.
- ThreatConnect’s Ian Davison talked about how automation and orchestration could enhance digital forensic artifacts with operationalized threat intelligence.
- Magnet Forensics’ Trey Amick and Curtis Mutter described the acquisition and analysis of Amazon Web Services (AWS) cloud data, including S3 buckets and EC2 instances.
Keep up to date with the SANS DFIR Summit at www.sans.org!