Collaborators Sought For Standardization Panel

On the Forensic Focus forum, tootypeg recently posted a call for collaborators on a project about standardisation in digital forensics; specifically, standardisation of witness statements / reports.

Tootypeg is looking to assemble a panel of members to debate and develop the terminology.

The full text of the forum post follows below; if you are interested in collaborating, feel free to add to the thread or PM tootypeg for more information.

Just wanted to gather your thoughts on a few things, and particularly a piece of work I am currently looking into. Basically it’s looking at standardisation, but from a report and evidence description point of view. It’s interesting that this was sort of mentioned in the ‘New digital forensics textbook – soliciting suggestions’ thread, but from an evidence misunderstanding point of view. I was wondering if it is possible as a field to develop a standard set of technical language / definitions which can be used globally in all reports, and in addition a set of criteria which must be met in order to be able to use such a definition in a court report.


For example, as a field we might define and explain an internet history record as ‘A, B & C’, and in order to be able to use that definition, conditions ‘X, Y & Z’ must be present in the case. I am thinking that this could lead to greater consistency across all cases if every practitioner used it, and would provide courts with a consistent, known description of different types of evidence which they could become familiar with, along with the conditions surrounding it. It would also reduce the potential for misinterpretation of content caused by inconsistent descriptions.

I don’t know if I’m talking rubbish here, but in my head on the way to work it seemed to make sense. I would be interested to hear thoughts on this, particularly on feasibility and the need for it, and whether anyone might be interested in collaborating / working on it if it’s useful.

Read the full thread here.


Latest Videos

Quantifying Data Volatility for IoT Forensics With Examples From Contiki OS

Forensic Focus 22nd June 2022 5:00 am

File timestamps are used by forensics practitioners as a fundamental artifact. For example, the creation of user files can show traces of user activity, while system files, like configuration and log files, typically reveal when a program was run. 

Despite timestamps being ubiquitous, the understanding of their exact meaning is mostly overlooked in favor of fully-automated, correlation-based approaches. Existing work for practitioners aims at understanding Windows and is not directly applicable to Unix-like systems. 

In this paper, we review how each layer of the software stack (kernel, file system, libraries, application) influences MACB timestamps on Unix systems such as Linux, OpenBSD, FreeBSD and macOS.

We examine how POSIX specifies the timestamp behavior and propose a framework for automatically profiling OS kernels, user mode libraries and applications, including compliance checks against POSIX.

Our implementation covers four different operating systems, the GIO and Qt library, as well as several user mode applications and is released as open-source.

Based on 187 compliance tests and automated profiling covering common file operations, we found multiple unexpected and non-compliant behaviors, both on common operations and in edge cases.

Furthermore, we provide tables summarizing timestamp behavior aimed to be used by practitioners as a quick-reference.

Learn more: https://dfrws.org/presentation/a-systematic-approach-to-understanding-macb-timestamps-on-unixlike-systems/
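The MACB timestamps the paper profiles can be inspected directly via the POSIX `stat` interface. The sketch below is purely illustrative and is not the paper's profiling framework: it reads the modification (M), access (A), and inode-change (C) times that POSIX defines, and probes for a birth (B) time, which POSIX does not guarantee and which is only exposed on some systems (e.g. macOS and FreeBSD via `st_birthtime`; Linux instead requires `statx()`).

```python
import os
import time

def macb_timestamps(path):
    """Return the MACB-style timestamps available for `path` via stat()."""
    st = os.stat(path)
    stamps = {
        "modified (M)": st.st_mtime,   # last change to file content
        "accessed (A)": st.st_atime,   # last read (may be lazy, e.g. relatime)
        "changed (C)":  st.st_ctime,   # last inode/metadata change on Unix
    }
    # Birth time is not part of POSIX; probe for it defensively.
    birth = getattr(st, "st_birthtime", None)
    if birth is not None:
        stamps["born (B)"] = birth
    return stamps

for label, ts in macb_timestamps(".").items():
    print(label, time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(ts)))
```

Note that, as the paper's compliance tests suggest, the exact semantics of each field (especially access time under mount options like `relatime`, and `st_ctime` versus creation time on Windows-influenced tooling) vary across kernels and file systems, so values read this way should be interpreted with the specific platform in mind.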



A Systematic Approach to Understanding MACB Timestamps on Unixlike Systems

Forensic Focus 21st June 2022 5:00 am


