Gernot Schmied, an IT civil engineer and court expert, reviews Amped Authenticate, a product from Amped Software designed to uncover the processing history of digital images and videos.
Abstract
This article takes you on a journey into the world of expert witness analysis and testimony and its challenges relating to multimedia evidence. It does so relying on Amped Authenticate for photo and video analysis and a selection of examples.
Legal Proceedings and Expert Witness Work
I am an expert witness in court with a background in information technology and applied physics, specializing in multimedia forensics: audio, video, photo, screenshots, streaming, and media embedded in, for example, PDF or email files. My lab is in Vienna, Austria, and operates within the continental European legal framework. Most of the casework is conducted in Austria, Germany and Switzerland. This framework, and especially the discovery procedure, is quite different from the US and UK legal systems.
The court largely has freedom of appraisal regarding evidence in the individual case. However, it will surely involve an expert witness when the evidence is questionable, has been challenged, or requires restoration, enhancement, or transcription beforehand due to poor quality.
Expert witnesses must perform their duty “lege artis”, which means competently according to the rules and good practices of the craft, including keeping their knowledge current and, in our field, following the evolution of the state of the art and science and being aware of international standards. Not to forget that multimedia forensics is a best-effort approach with no guarantee whatsoever of pleasing or revealing results.
Multimedia evidence is what I like to refer to as “legal proceedings agnostic”. It can pop up in civil and criminal cases, but also in employment law, divorces or virtually any legal proceeding context imaginable. In an ever-increasing number of cases, multimedia evidence plays a significant role in proving or disproving aspects, or in establishing or demolishing a digital alibi. We all have smartphones, smartwatches, and fitness wristbands with us in abundance and around the clock, either recording automatically or triggered within an instant. This has led to law enforcement relying heavily on publicly provided evidence and on mass/batch processing due to the huge number of recordings submitted, which leads us to the next chapter.
Synthetic Content and Deepfakes
Until recently, we could rely on what is referred to as “judging evidence by personal inspection”: watching, looking at, or listening to evidence using our own sensory system. Occasional challenges usually revolve around parties claiming tampering, manipulation or alteration, which at the very least must be substantive and plausible within the context of the specific case.
With the rapid evolution and variety of synthetic content, aka “deepfakes”, this whole wonderful system of “evidence by personal inspection” has been shaken to its core, catching the legal professions off guard. We can no longer trust what our senses tell us, what we hear or see. Nor can we afford to take a closer look only when somebody articulates substantial doubt or when something does not feel plausible in the context of an individual case.
We need a paradigm shift to do a routine synthetic content check before proceeding at all with multimedia evidence. It is not just a quick check but a sufficient verification that establishes trustworthiness and confidence in the evidence and preserves its evidentiary value to the highest degree possible. Even more important is the entire procedure of preserving evidence in its most original or “virgin” state or being able to tell the story of what might have happened to evidence since it originally came into existence the very first time and whether it is authentic, original, inconclusive, or not.
Media and broadcasting companies were among the first to do routine deepfake checks in advance, because they have been used to dealing with questionable sources and fake content for much longer and burned their fingers a lot earlier in the process.
Amped Authenticate in the Courtroom
Amped Authenticate has made my life in court a lot easier in presenting evidence, analysis results, audit trails and conclusions. Due to its inherent scientific approach, it made it much harder to challenge my lab work and gave me the confidence to do occasional live discussions within the tool with legal professionals and even in legal proceedings.
Amped Authenticate supports forensic examiners and their requirements very well and does not resort to oversimplifying or playing down the challenges of integrity, verification and authentication. It also offers strong batch processing capabilities and excellent implementation of carefully selected and tested scientific methods and parameters.
I make use of the “smart report exports” (Figure 1) quite often to get the case narrative going and to assist and ease the reading of the more detailed and “harder to digest” expert witness testimony in its entirety. Both approaches naturally make heavy use of Amped Authenticate bookmarks and annotations. While certain filters and scientific methods can provide indications of tampering or recapturing, they do not guarantee definitive conclusions and still rely on expert judgment.
The questions that judges, prosecutors or attorneys usually raise concern manipulation, forgery or tampering, cuts, duplications, and whether the evidence is genuine. They do not intuitively think in terms of integrity and authenticity or camera originals, and they are not necessarily familiar with the forensic process of authentication and the search for forensic artifacts, inconsistencies, or conspicuous features.
I try to stay away from expressions such as forgery or manipulation in my reports. These terms intrinsically imply motive or malicious intent. In general, with few exceptions, such intent cannot be derived from multimedia evidence at all, especially given other possible explanations, such as unintentional alteration by accident or a user simply not being aware of what software does to metadata during import.
I suspect we have all had our share of Photoshop and Adobe XMP metadata discussions, and some experts have jumped to malicious-intent conclusions far too quickly.
Wording matters a lot. For similar reasons, I deeply dislike percentages to express confidence in conclusions or opinions or the use of “beyond reasonable doubt”, the latter being a privilege of law professionals and not expert witnesses.
The Amped Scientific Approach
I read a lot of scientific publications for my casework, especially in audio and smartphone app forensics. Some I can follow from a mathematical or signal-theoretical point of view, while others are beyond my grasp; promising ones I try to implement myself in Matlab and Python. Some scientific methods, as published, are merely proofs of concept. They work only under perfect conditions, are restricted to very specific inputs and preparations, or are simply too complicated or calculation-intensive to implement. Hence, they are not robust and versatile enough for real-life evidence.
In our daily casework, we often face non-ideal conditions and imperfect input variety. Amped Software excels at identifying milestone scientific publications with great potential for implementation that work robustly in these challenging scenarios.
One reason for sure is their own strong involvement in scientific research. The Amped Authenticate manual provides a lot of valuable information about the limitations of methods and filters, configuration parameters and scientific references.
Reporting and Expert Witness Testimony
Writing a good expert witness testimony that is well structured, to the point, and easy to read, without compromising accuracy, is both craftsmanship and art. It takes discipline, focus and experience. It never should become routine, and every case needs to be approached with fresh eyes and an open mind. Sometimes a longer break and returning to the case a few days later helps a lot, especially when looking at audio or video evidence repeatedly, as the mind can start playing tricks on us (“autosuggestion”).
Amped Authenticate, with its mix and arrangement of available scientific methods, filters, and parameters, does a great job of assisting the expert witness without constraining them. However, much still rests on the expert judgment and experience of the analyst:
- To decide under what circumstances, constraints, and context of a specific case an individual method of analysis is feasible, and hence whether it will produce potentially useful results or not
- Which factors or counter-forensics measures can possibly render a method inapplicable
- Whether to document analyses discarded and the reasoning for that
- To interpret the results cautiously
- To draw the conclusions possible and never go beyond that
All this requires a very good understanding of the specific method, its scientific foundation, parameterization, implementation, and its limits and uncertainty. It is a bit like machine learning; if you leave the application domain of the model, it will produce meaningless results.
What I consider most important at the end of the casework is “reverse verification”. It means that every conclusion drawn, every opinion formulated, and every confidence or likelihood expressed can be traced back to the evidence at hand, comparisons, exemplars, data and analysis results, and nothing more. Failing this test, we are in danger of being speculative or conjectural. This is a good way to keep bias and opinion in check, hence maintaining a consistently objective and professional approach.
It is more difficult to objectify experience though. When introducing an experience statement, we should try to keep it professionally objective. Additionally, we should reverse-verify it and be conscious of the danger that it might be leading us toward bias as well. Who else agrees with that opinion, and is it a consensus in the scientific community? Don’t get me wrong, just quoting scientific papers is not the holy grail and not the solution to every challenge, nor is a productive stream of papers a guarantee of the competence of the author. Having said all this, additional intra- or inter-lab peer reviews are always a great way of quality control and verification. Every lab should have documented procedures regardless of possible lab certification.
Finally, the analyses must be conducted and documented in a way that allows any other expert to follow the report and verify or reproduce the findings using the same methods, parameters, and tools.
Scenarios and Examples
Arsenal of Tools & Amped Authenticate
Alongside a mix of open-source tools and “forensically abused” mastering, post-production, video-editing, and video-structure, measurement, and quality-assurance software, I have come to rely heavily on and appreciate Amped Authenticate. It has become my tool of choice for daily authentication work on photos, screenshots, individual video frames, and recently entire videos. It is worth mentioning that Amped FIVE has additional video features that nicely complement the video part of Amped Authenticate and add value and insight to other dimensions and aspects of video analysis.
I was thrilled when the great team in Trieste, Italy, decided to add video authentication to Amped Authenticate and nicely integrated it. The related features are constantly evolving. So has deepfake detection, which started with generative adversarial networks (GAN) and has recently been extended to the family of diffusion models.
In that context, I especially like the powerful complementary verification feature for shadows and light sources. Besides using SunCalc, MoonCalc and TimeandDate, I also recommend including weather data and vegetation analysis for circumstantial verification of photos and videos, e.g. vegetation unusual for the location, or a vegetation period inconsistent with the date. Time zones and daylight-saving time play a crucial role in establishing forensic timelines. The safest option is to express everything consistently in UTC, especially when evidence travels across the world.
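To illustrate the point about time zones and daylight-saving time, here is a minimal Python sketch (not part of any Amped tool) that normalizes local timestamps to UTC using the standard-library zoneinfo module; note how the same wall-clock time in Vienna maps to different UTC times in summer and winter:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def to_utc(local_str: str, tz_name: str) -> datetime:
    """Interpret a naive local timestamp in the given IANA zone, convert to UTC."""
    naive = datetime.strptime(local_str, "%Y-%m-%d %H:%M:%S")
    aware = naive.replace(tzinfo=ZoneInfo(tz_name))
    return aware.astimezone(ZoneInfo("UTC"))

# Summer timestamp in Vienna (CEST, UTC+2)
print(to_utc("2023-07-15 14:30:00", "Europe/Vienna"))  # 2023-07-15 12:30:00+00:00
# Winter timestamp (CET, UTC+1): daylight-saving time changes the offset
print(to_utc("2023-01-15 14:30:00", "Europe/Vienna"))  # 2023-01-15 13:30:00+00:00
```

A one-hour discrepancy like this is exactly the kind of detail that can make or break a forensic timeline.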
Camera Ballistics and Smartphone Verification
Many cases nowadays revolve around smartphone recordings. If the (alleged) recording device is available, this opens a wealth of additional verification options such as folder structure and default filenames, application defaults, chat protocol context, timeline context, geo-data and SQLite lookups of photo and video data.
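As a rough illustration of what an SQLite lookup of photo data can look like, the following self-contained sketch builds an in-memory database and queries it for a time window. The table and column names are invented for this example; real vendor databases (e.g. iOS Photos.sqlite) use entirely different schemas:

```python
import sqlite3

# Hypothetical schema, invented for illustration; not any vendor's actual layout
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE media_items (
    filename TEXT, created_utc TEXT, latitude REAL, longitude REAL)""")
con.executemany(
    "INSERT INTO media_items VALUES (?, ?, ?, ?)",
    [("IMG_0001.JPG", "2023-07-15T12:30:00Z", 48.2082, 16.3738),
     ("IMG_0002.JPG", "2023-07-15T12:31:05Z", 48.2083, 16.3740)])

# Typical lookup: all items recorded within a time window of interest
rows = con.execute(
    "SELECT filename, created_utc FROM media_items "
    "WHERE created_utc BETWEEN ? AND ? ORDER BY created_utc",
    ("2023-07-15T12:00:00Z", "2023-07-15T13:00:00Z")).fetchall()
for fn, ts in rows:
    print(fn, ts)
```

In practice, such queries help correlate recordings with chat protocols, geo-data and the overall case timeline.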
On the other hand, the evolution of AI and computationally assisted smartphone photography has made our lives more difficult. The initial image may undergo alterations before it is ever saved, so AI and modern image processors need to be considered for integrity verification and authentication.
We can generate verification (reference) photos and videos (exemplars) for comparison and as input for “camera ballistics”, the latter being a method of origin verification. The expression is borrowed from firearms analysis, which verifies the relationship between a weapon and the ammunition fired through the unique barrel markings left on the projectile.
The same idea applies to camera ballistics: no two image sensors of the same product model are exactly alike in their overall characteristics and noise, especially the always-dark and always-lit photo-sites (sensor pixels) and other kinds of defects or manufacturing variations. In Amped Authenticate, this is combined with metadata analysis and JPEG quantization tables.
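A minimal sketch of one such sensor-defect check (not Amped's implementation): given a stack of dark-frame exemplars from the alleged device, photo-sites that read bright in every frame are flagged as hot pixels, one ingredient of a sensor fingerprint. The data here is synthetic:

```python
import numpy as np

rng = np.random.default_rng(42)
H, W, N = 32, 32, 10                      # tiny toy sensor, 10 dark frames
frames = rng.normal(5.0, 2.0, (N, H, W))  # dark-frame read noise
frames[:, 7, 12] += 200.0                 # simulated "always lit" photo-site
frames[:, 20, 3] += 180.0                 # another one

# Flag a pixel as hot only if it exceeds the threshold in *every* frame
threshold = 50.0
hot = np.all(frames > threshold, axis=0)
coords = sorted(tuple(int(v) for v in p) for p in np.argwhere(hot))
print(coords)  # [(7, 12), (20, 3)]
```

Matching such defect maps between evidence and exemplars is conceptually how the "barrel markings" of a sensor are compared.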
If the recording device is not available, we can still attempt verification or falsification using reference photos from the Internet that closely match the device, firmware and software/app version derived from the initial analysis, or using a comparable device available for lab analysis.
Screenshots and Screen-Photography (Recapture)
The Amped Authenticate Fourier analysis filter calculates and displays the DCT (Discrete Cosine Transform) of an image. Hence, it can identify and visualize the moiré effects caused by (re)capture from high-resolution monitors and their periodic structure. On the left (evidence) is a screen capture from an iPhone; on the right (reference) is a screenshot of the ShareX application (Figure 2). The peak autodetection of the Fourier analysis filter does a good job of emphasizing periodicity (Figure 3).
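The underlying principle can be sketched in a few lines (this is a toy demonstration, not Amped's filter): a periodic screen-pixel pattern superimposed on image content produces sharp off-center peaks in the frequency domain, which stand out against the spectrum of natural content. Here a simple FFT is used with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 128
scene = rng.normal(0, 1, (N, N))               # stand-in for natural content
x = np.arange(N)
grid = 0.8 * np.sin(2 * np.pi * 16 * x / N)    # periodic "screen" pattern
recaptured = scene + grid[None, :]             # pattern along one axis

def offcenter_peak_ratio(img):
    """Strongest off-center spectral peak relative to the median magnitude."""
    mag = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    c = img.shape[0] // 2
    mag[c - 2:c + 3, c - 2:c + 3] = 0          # suppress the DC neighbourhood
    return mag.max() / np.median(mag)

print(round(offcenter_peak_ratio(scene), 1))       # modest: no periodic peaks
print(round(offcenter_peak_ratio(recaptured), 1))  # much larger: grid peaks
```

The large ratio for the recaptured image corresponds to the bright, regularly spaced peaks that the filter's peak autodetection highlights.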
Social Media Identification and Double-Encoding/Compression Detection
I chose this example because double-encoding/compression detection is of key importance for integrity verification and authentication.
The following evidence example was downloaded from my Facebook account’s photo archive (Figure 4). The Amped Authenticate Social Media Identification module properly identifies it as such (Figure 5). The metadata has also been altered and reduced by the Facebook platform. Furthermore, the JPEG Ghost plot clearly depicts two minima, providing strong evidence of double compression, which is not to be expected from a camera original (Figure 6). The 71% quality appears to be related to the most recent compression, and 87% to a previous compression. A way to further verify this finding is the DCT plot function, which shows multiple peaks in the Fourier domain, artifacts likewise related to double compression (Figure 7).
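The intuition behind the JPEG ghost can be shown with a toy model that replaces the full JPEG pipeline with plain scalar quantization (so this is an illustration of the principle, not a real JPEG analysis): data quantized first with step 5 and then re-saved with step 3 shows an extra error dip when re-quantized with step 5, betraying the earlier compression.

```python
import numpy as np

def quantize(x, q):
    """Uniform scalar quantization with step q (stand-in for JPEG coefficients)."""
    return q * np.round(x / q)

x1 = quantize(np.arange(0, 500, dtype=float), 5)  # "first save", step 5
x2 = quantize(x1, 3)                              # "second save", step 3

# Re-quantize the final data with candidate steps and measure the error
errors = {q: np.mean(np.abs(quantize(x2, q) - x2)) for q in range(3, 9)}
print(errors[3])  # 0.0 (exactly matches the most recent quantization)
print(errors[5] < errors[4] and errors[5] < errors[6])  # local dip at step 5
```

The zero at step 3 corresponds to the most recent compression quality, and the local minimum at step 5 is the "ghost" of the earlier one, analogous to the two minima in the JPEG Ghost plot.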
Video Analysis with Amped Authenticate Video
Amped Authenticate Video primarily uses the FFMS video engine and supports per-frame analysis and per-frame hashing, which can be used to detect duplicates. MediaInfo, ffprobe and ExifTool provide detailed insights into tracks, CODECs and all kinds of video attributes (Figure 9).
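The per-frame hashing idea can be sketched as follows (a toy demonstration with synthetic frames, not Amped's implementation): identical decoded frames yield identical digests, so repeated digests expose duplicated or frozen frames.

```python
import hashlib
import numpy as np

rng = np.random.default_rng(1)
frames = [rng.integers(0, 256, (4, 4), dtype=np.uint8) for _ in range(5)]
frames.insert(3, frames[1].copy())   # simulate a duplicated (frozen) frame

seen, duplicates = {}, []
for idx, frame in enumerate(frames):
    digest = hashlib.sha256(frame.tobytes()).hexdigest()
    if digest in seen:
        duplicates.append((seen[digest], idx))   # (first seen, duplicate)
    else:
        seen[digest] = idx

print(duplicates)  # [(1, 3)]
```

Note that cryptographic hashes only catch bit-identical frames; any re-encoding changes pixel values and breaks the match, which is why this check works on decoded frames of a single file.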
A Group of Pictures (GOP) is a structured group of successive frames in an MPEG-encoded video stream used for inter-frame compression. GOP analysis and statistics give an overview of the GOP structure (I, P and B frames), its repetition, deviations, statistical composition and whether it is fixed or variable (Figure 8). A specific frame can be sent over to Amped Authenticate Image Mode as a PNG for additional analysis.
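A simple sketch of what GOP statistics boil down to: given the sequence of frame types (as reported, for instance, by ffprobe's per-frame pict_type output), compute the GOP lengths; a deviation in an otherwise fixed pattern can hint at a cut or re-encoding. The sequences below are made up for illustration:

```python
def gop_lengths(pict_types: str) -> list[int]:
    """Split a frame-type sequence at each I-frame and return GOP lengths."""
    lengths, current = [], 0
    for t in pict_types:
        if t == "I" and current:
            lengths.append(current)
            current = 0
        current += 1
    if current:
        lengths.append(current)
    return lengths

fixed = "IPPBPPBPP" * 3                            # fixed GOP of length 9
tampered = "IPPBPPBPP" + "IPPB" + "IPPBPPBPP"      # short GOP at a cut point

print(gop_lengths(fixed))     # [9, 9, 9]
print(gop_lengths(tampered))  # [9, 4, 9]
```

A variable-GOP encoder can of course produce uneven lengths legitimately, which is exactly why the fixed-versus-variable determination matters before interpreting such deviations.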
Figure 10 shows a positive compatibility match for PRNU Source Identification, depicting a high PCE (Peak to Correlation Energy) value above the threshold, indicating a high correlation probability with the generated CRP (Camera Reference Pattern). The PRNU tampering detection tool (Figure 11) allows you to drill down on details and identify sections of a video recording that have been acquired with the reference device. Note that image stabilization of any kind is the enemy of PRNU video analysis and of CRP creation and comparison. Where possible, it should be turned off or compensated for by using a tripod or fixed mount for the reference recording. Stabilized evidence may be unsuited to PRNU video analysis.
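To give a feel for what a PCE value expresses, here is a heavily simplified toy computation (not Amped's implementation): the noise residual is circularly cross-correlated against a camera reference pattern, and the correlation peak's energy is compared to the energy everywhere else. The residuals and CRP are synthetic:

```python
import numpy as np

def pce(residual, crp):
    """Toy Peak-to-Correlation-Energy: peak energy vs. mean off-peak energy."""
    # Circular cross-correlation via FFT
    corr = np.real(np.fft.ifft2(np.fft.fft2(residual) * np.conj(np.fft.fft2(crp))))
    peak = corr.flat[np.argmax(np.abs(corr))]
    energy = (np.sum(corr ** 2) - peak ** 2) / (corr.size - 1)
    return peak ** 2 / energy

rng = np.random.default_rng(7)
crp = rng.standard_normal((64, 64))
match = 0.3 * crp + rng.standard_normal((64, 64))  # residual containing the CRP
nonmatch = rng.standard_normal((64, 64))           # residual from another camera

print(pce(match, crp) > 100)   # well above a plausible decision threshold
print(pce(nonmatch, crp) < 50) # stays low for an unrelated residual
```

Real PRNU workflows additionally involve denoising, normalization, and geometric alignment, which is precisely where image stabilization causes trouble.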
Conclusion
This article has tried to establish the value of Amped Authenticate for photo and video analysis in the context of expert witness work and its challenges. It has become an indispensable tool I have great confidence in, and I also have great appreciation for the ongoing effort to scientifically evolve the field of integrity verification, authentication, and deepfake and tampering artifact detection.
As a concluding remark, I’d like to emphasize the importance of not relying on a single artifact or analysis result for judgement and opinion. In general, it requires several conclusive results to express convincing conclusions with confidence. We should also never be afraid of communicating inconclusive results or the fact that we sometimes simply do not know or cannot sufficiently explain.