by Christa Miller, Forensic Focus
When we published our March opinion piece, “When Digital Forensics Vendors Hire Research Talent, Where Does It Leave Research?” the reaction on Twitter — mostly from vendor representatives themselves — was swift and strong. While few other non-vendor researchers joined the discussion, the points participants raised are worth a follow-up.
Is there a tradeoff between resources and profit?
Tweets from Heather Mahalik, Cellebrite’s Senior Director of Digital Intelligence, and from Magnet Forensics’ Forensic Consultant Mike Williamson and Forensic Instructor Christopher Vance all reflect that their jobs afford them the time they need to do deep work that isn’t case-driven and, by extension, isn’t limited by a case timeline or by the parameters of their own free time.
“When I was a forensic examiner, there were times where multiple imminent deadlines made it tough to pursue any tangents,” Williamson wrote later in an email, adding that only occasionally did he have the flexibility to go down a “rabbithole.”
“As an examiner in law enforcement, there was a constant push to get through more and more files,” he continued. “For every file you’d take on, there’s a dozen more still waiting to get assigned. How can a lab manager reasonably commit examiner hours to dev or research projects? This was a constant struggle for me (and other coder types in my lab) — and often resulted in me investing my own time off the clock in order to get things done. Now, I’m able to take on research projects as a part of my regular day-to-day and that’s been fantastic!”

Besides time, Williamson cites new access to people “from all sorts of backgrounds with proficiency in just about every discipline you can think of… from law enforcement to corporate, intelligence, and military. The knowledge transfer that takes place just from being around so many experts in different areas is just excellent.” In addition, both Williamson and Vance tweeted about the benefits of their access to new types of cases.
None of this negates the possibility that profit motives drive the research, of course, but therein lies a draw: case-driven research may be just as necessary, yet it is oriented toward the interests of the case just as vendor-driven research is oriented toward the interests of the vendor.
Researcher Josh Hickman explained in an email: “I would expect when a researcher is working for a vendor they are tackling the extremely difficult topics that require considerable effort and resources, both of which independent researchers are lacking.” Hickman points to BlackBag research on APFS encryption and the T2 chip, as well as the Windows 10 Activity Timeline, completed by Dr. Joe Sylve, Director of Research and Development, and Dr. Vico Marziale, Senior Digital Forensics Researcher, respectively.
Hickman perhaps demonstrates the best of both worlds. Detailing his extensive Wickr research collaboration with Williamson, Mahalik, and her Cellebrite colleague Or Begam, R&D Team Leader for Decoding Research, he wrote: “The research and writing took just over two months. I started working on the general background the week before I left for DFRWS USA 2019, and the blog was posted on September 11, 2019. I spend more time on the write-ups than I do with the actual research/backgrounding.”
Hickman reflected that his blog “would have been really boring” without vendor support. “It would have been limited to ‘This app is forensically a pain’ and where an examiner *might* find message attachments,” he explained.
Does research become a competitive advantage?
Williamson’s Magnet colleague, Director of Forensics Jessica Hyde, maintained that while some of a tool’s aspects — its acquisition methodology, platform, and viewing — remain behind closed doors, plenty of other research — and collaboration with “competing” researchers — is fair game.
Hickman offered KnowledgeC database research by Sarah Edwards, now a BlackBag Senior Digital Forensic Researcher, as an example of independent research that was ultimately rolled into multiple vendor tools. Williamson referenced his own fast .NET zip parsing library and a mass plist viewing tool, together with iOS Photo Vault research and Python scripting. “Instead of just sharing the actual scripts on a blog update, I encouraged people to contact me for them and have had quite a significant number of people reach out,” Williamson explained.
He grants that weighing “community good” and “company good” factors can be challenging. “There’s no doubt there’s an added layer of complexity when deciding to share work publicly vs keeping it internal,” he said. “But I think there are different kinds of constraints everywhere — in law enforcement you can’t always share, either, but for different reasons.”
He believes there’s also a “middle ground”, as, for instance, when a vendor grants a customer access to a beta build. “[This] might allow us to share work that isn’t completely finished without inadvertently sharing IP,” he explained.
Indeed, Hickman pointed out that without the pre-release access Cellebrite granted to the new Wickr support in Physical Analyzer, the Android database portion of the blog post wouldn’t exist. “Prior to their help the database was completely inaccessible due to encryption,” he explained.
(He also acknowledged that since writing the Wickr post, his level of access to mobile forensic vendor tools has changed, with the result that a follow-up post “would not be as balanced or fair as I would prefer.”)
“While the encryption schema of Wickr is the same across the platforms, how it is implemented is different between Android and iOS,” he continued. “Cellebrite had a solution for the Android crypto in a matter of days; however they did not have one for iOS and indicated it may take some time to understand the crypto implementation.”
Hickman tweeted about that, and Williamson reached out. Although the iOS implementation was more complex, Williamson’s reverse-engineering work enabled him to figure it out quickly and to help Hickman understand it.
In his email, Hickman acknowledged that the outcome of his research might have been different had he asked for assistance with the Android and/or iOS apps — rather than a general Wickr question that resulted in solutions on just one platform from each vendor.
“There are times when there are different ways to accomplish the same task in DFIR, so it is important to understand how the tool(s) … accomplish a task(s),” he wrote. “If vendors have different solutions for the same problem it would be important for examiners to understand the differences between the tool(s) they use.”
Even so, Hickman added, owing to R&D and intellectual property issues, “vendors may be reluctant to share *how* they are handling certain operations in their tools.” For that reason, he avoided “cross-contaminating” his Wickr research by keeping his communications with vendor representatives high-level, and by relying on the use of known datasets to confirm performance and accuracy.
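To make the “known dataset” approach concrete, here is a minimal, hypothetical sketch of how an examiner might compare a tool’s export against ground-truth data they seeded on a test device. The file names and column names below are placeholders for illustration only; they do not reflect Hickman’s actual workflow or any specific vendor tool’s export format.

```python
import csv

# Hypothetical files: "known_messages.csv" holds the records the examiner seeded
# on the test device (the ground truth); "tool_export.csv" is a CSV export of
# what a vendor tool recovered. Column names are assumed for illustration.
KEY_FIELDS = ("timestamp", "sender", "body")

def load_records(path):
    """Load a CSV of messages into a set of comparable (timestamp, sender, body) tuples."""
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        return {tuple(row[field].strip() for field in KEY_FIELDS) for row in reader}

def compare(known_path, export_path):
    """Report records the tool missed and records it reported that weren't seeded."""
    known = load_records(known_path)
    exported = load_records(export_path)
    missed = known - exported        # seeded data the tool failed to recover
    unexpected = exported - known    # tool output not present in the ground truth
    print(f"Recovered {len(known & exported)} of {len(known)} known records")
    print(f"Missed: {len(missed)}  Unexpected: {len(unexpected)}")
    return missed, unexpected

if __name__ == "__main__":
    compare("known_messages.csv", "tool_export.csv")
```

A comparison like this only confirms what a tool surfaces against data the examiner already controls; it says nothing about how the tool works internally, which is exactly why it avoids the “cross-contamination” Hickman describes.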
The relationship between vendors and independent researchers
Relying on those known datasets, Hickman was able to pinpoint examinations where, he said, “…the tools were not providing output for Wickr data I knew was present.” Running his own tests, he discovered a change in the encryption schema, similar to what Alexis Brignoni had encountered.
In her tweets, Hyde acknowledged that the time to perform this level of validation can be hard for many examiners to come by, even when they recognize its necessity. “There is a robust conversation around verification and validation worth having,” she tweeted.
Therein lies the rub: with limited time when there are cases to be made, and with the admissibility standard of a “tested theory or tool” a moving target in digital forensics, weighing the extent of validation against finding more data introduces risk. Some users, particularly those pressed for time, may treat a single tool’s results as a “one and done” opportunity. And because not every lab adheres to the same standard, there’s no way to know to what extent multiple tools are or aren’t used for validation.
Williamson offers some practical validation advice for time-strapped investigators. “No one reasonably expects every file to be treated like a ‘career file’ with every single molecule of evidence being validated by hand,” he says. “But that isn’t a license to adopt a blanket policy of just accepting what one tool says at face value, either.”
He says the decision to dig deeper should weigh the importance of a particular piece of evidence (or device) to the case, whether multiple vendor tools support that data source, and the type of data in question and how frequently it is encountered.
“If it’s an obscure app, validation might be an excellent opportunity for the examiner to familiarize themselves with it,” he adds. “Sometimes just generally knowing how an app works can lead to important questions being asked of results being reported by a vendor tool. It’s beneficial to be able to say that you’ve actually got experience using a certain app in court.”
Hyde said she and other vendor representatives address this through their external teaching and both company and personal blogs. Echoing Williamson’s observation that vendor work broadens their reach, she says, “This is why we all… preach a toolbox approach… in vendor neutral manners in secondary jobs stressing the importance of the fundamentals that are needed both by practitioners and researchers.”
Independent research can help mitigate the risk of tools “breaking” when app or operating system versions change. “I think this sheds light on the importance of understanding the underlying artifact(s), verifying vendor tools are interpreting the artifact(s) correctly, and being able to do manual examinations when they are not,” Hickman wrote.
At the same time, he added, it may also highlight the need for better vendor transparency. “For me, it would be great to have some sort of way to discern the riskier methods from the tried & tested ones… to know the comparative risk (or potential impact to continuity etc.) for each one. Essentially, having full disclosure that a certain technique is risky would be much preferred to not including it at all.”
Additionally appealing, he said, would be “a standard or even guidelines for tool transparency in order to facilitate validation at the examiner level. Vendors of course must continually try to strike a balance between disclosing enough that validation is possible and not being so open that decoding techniques, which have been painstakingly engineered, can be easily deciphered as a result.”
Hickman’s experience highlights the relationship between independent community researchers and vendors. Before Wickr’s encryption scheme change, both Cellebrite and Oxygen had supported the app. Hickman couldn’t say whether his research spurred new support, but was certain that with “their respective R&D methods and cadence… they would have eventually discovered the issue either on their own or by someone else (other than me) inquiring.”
Calling independent researchers “part of the DFIR front line” owing to how quickly the industry changes, Hickman said, “Independent research will continue so long as there is some need that is not being fulfilled, whether it be a special one-off situation in which the ‘big name’ tools are not parsing a particular thing, or an examiner creates a tool/set of tools because they cannot afford the hefty price tag that comes with some DFIR tools.”
At the same time, he added: “Vendors contribute to the community through their products…. I know, at times, there are questions about *how* the tools are doing what they are doing (we can make educated guesses based on our understanding of artifacts) and, at times, some vendors are not as forthcoming as some would like, but vendors are free to choose what to share/not share…. In a way, we ‘validate’ their research when we validate their tools.
“The research by both vendors and independent researchers contribute to the aggregate of DFIR knowledge,” Hickman continued. “The goals of vendors and independent researchers are the same: find/access the data, analyze it, interpret it correctly, and present it in a way so it can be used.”
Defining their relationship as “symbiotic,” Hickman said each research approach plays a vital role. “Obviously, vendors have more resources (i.e. money, people, time) than independent researchers, but independent research can be JUST as good or even better, and independent researchers can have unique insights into some new app/platform capability that vendors may not necessarily have.”
Is the DFIR community commoditized?
Williamson says he feels more in touch now than ever before with the DFIR community. “I’m able to set aside time to stay caught up on current events, read blogs, Discord, and social media, listen to podcasts, and liaise with customers working in law enforcement and corporate environments,” he says. “Interfacing with customers is an important part of my current role, and doing it regularly helps me keep a pulse on digital forensics.”
That pulse helps with decision-making around research priorities. “There’s been a number of times where those interactions led me to passing along information directly to our product team in a more direct way than might be possible with traditional support channels,” Williamson says, adding that some low-impact ways to participate in the community include posing questions on the DFIR Discord server — “I think asking for replies in DM works best for this” — or on Google Groups. More involved: becoming a contributor to an open source project.
“I think the ‘niche-ness’ of open source in digital forensics may be changing,” Williamson says, pointing to Brignoni’s ALEAPP and iLEAPP tools as examples. “These are open source projects with a pretty much household name seeing massive popularity, and it now has contributors from all over!”
For his part, Hickman has recently focused on contributing forensic images to the community — a project with multiple benefits. “When I started generating them I was working in a lab that was ISO 17025 and we needed known, updated data sets from mobile devices to test our tools prior to putting them into service for case work,” he explains.
Learning via AboutDFIR.com that others in the community needed the images for their own efforts in education, training, and research, Hickman opted to share them. “It benefited my employer at the time and the wider DFIR community,” he says. “It also has the benefit of giving me a bunch of things that I can research when I have the time.”
“I think as vendors we can help support the open source community by embracing great work and not being afraid to mention it publicly,” says Williamson, “even if it might overlap in functionality with our own product somewhat. At the end of the day, the more people we have contributing to the development of forensics tools the better.”
At the same time, researchers in both spheres would do well to keep in mind that as technology accelerates, new legal, ethical, and even moral questions will arise around privacy, identity, and society. Digital forensics has matured enough to have a “way we’ve always done it,” but while the foundational methods shouldn’t change, how they’re implemented might need to.
This is already being borne out by the advent of location data, cloud storage, the blockchain and the distributed capabilities that rely on it, the prospective end of Moore’s Law, and other advances that have introduced new data storage and aggregation realities.
With vendors wielding a disproportionate degree of influence in the community owing to their resources, hiring trusted researchers is about more than R&D; it’s about shouldering the responsibility of letting scientific leadership drive all facets of a vendor’s operation, including its marketing, sales, training, and business goals.
The result: vendors, researchers, examiners, courts, and private citizens can all more accurately weigh the potential benefits and costs of different commercial and open source methods, questioning together whether the risks are worth the results.