How To Investigate The Source Camera Of Digital Videos

Unique features that allow for identification are considered a real blessing in investigations. First, there was fingerprint analysis; then DNA analysis brought a real revolution, raising the contribution of scientific analysis to investigations to unprecedented levels. The field of digital images and videos is no exception: with the introduction of PRNU analysis, investigators finally had the possibility of attributing a digital image or video to the specific device that captured it. In domains like child sexual exploitation (CSE) cases, non-consensual pornography, or the fight against terrorist propaganda, this type of analysis can really steer the investigation and become a compelling piece of evidence.

But what is PRNU analysis? In a nutshell, each digital camera has a sensor sitting behind the lens, whose job is to capture light and convert it to digital values, which will later become pixel values. Now, sensors are made of silicon, and researchers found that each tiny “pixel element” has some unavoidable imperfection which makes its response to light non-uniform relative to its neighbors (PRNU stands for Photo Response Non-Uniformity). Sensors are made of millions of pixels, each with its peculiar response. Therefore, when put together, this non-uniformity pattern is highly distinctive.
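The idea can be illustrated with a tiny simulation. This is a deliberately simplified imaging model with hypothetical names (`make_sensor`, `capture`), not how any real camera pipeline works: each pixel outputs roughly (1 + K) × light + noise, where K is the sensor’s fixed, per-pixel PRNU pattern.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_sensor(shape=(64, 64)):
    # each sensor gets its own fixed, tiny multiplicative pattern K
    return rng.normal(0.0, 0.02, shape)

def capture(K, light):
    # simplified imaging model: output = (1 + K) * light + random noise
    return (1.0 + K) * light + rng.normal(0.0, 1.0, K.shape)

K_a, K_b = make_sensor(), make_sensor()
scene = np.full((64, 64), 120.0)   # flat, well-lit scene
img_a, img_b = capture(K_a, scene), capture(K_b, scene)
# same scene, two sensors: each image carries its own distinctive pattern
```

Subtracting the known scene from `img_a` leaves a residual that correlates strongly with `K_a` and not with `K_b`. Real analysis has to estimate that residual without knowing the scene, which is exactly what the denoising step described below does.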

How do we use that? It’s a stepwise process:

  1. CRP Creation: You take some pictures with the questioned camera and use them to estimate the camera’s characteristic PRNU noise pattern (we call it Camera Reference Pattern, CRP).
  2. Given an evidence image to be attributed, you extract the noise pattern from it and compare it to the CRP: if a high correlation is found, there is a strong chance that the image was captured by that device.
  3. If image authentication is needed, one can even analyze the PRNU match between the evidence image and the CRP pattern at a local level, so as to detect manipulated regions in the image.
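The three steps above can be sketched in a few lines of Python. This is a didactic sketch, not Amped Authenticate’s implementation: production tools use wavelet-based denoising and a maximum-likelihood CRP estimator, while here a Gaussian filter stands in as the denoiser and the helper names (`residual`, `estimate_crp`, `correlation`) are made up for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def residual(img):
    """High-pass noise residual: frame minus a denoised version of itself.
    (A Gaussian filter stands in for the wavelet denoiser used in practice.)"""
    img = img.astype(np.float64)
    return img - gaussian_filter(img, sigma=1.5)

def estimate_crp(frames):
    """Accumulate residuals over many frames: a content-weighted average
    in the spirit of the maximum-likelihood PRNU estimator."""
    num = np.zeros(frames[0].shape)
    den = np.zeros(frames[0].shape)
    for f in frames:
        f = f.astype(np.float64)
        num += residual(f) * f
        den += f * f
    return num / np.maximum(den, 1e-8)

def correlation(crp, img):
    """Normalised correlation between the test residual and CRP * image."""
    a = residual(img).ravel()
    b = (crp * img.astype(np.float64)).ravel()
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

With a CRP estimated from a handful of flat, well-lit frames, `correlation(crp, test_image)` comes out high for images from the same sensor and near zero for images from other cameras.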

All of the above has been available in Amped Authenticate for many years for investigating digital images. If you had to deal with a digital video, however, there were no solutions on the market. But since the latest update (19348), Amped Authenticate features Video PRNU!

The menu shown above reflects the various steps of PRNU analysis. 

Let’s go through the full process with a case example: we are given a recent Google Pixel 3 phone and we need to check whether an evidence video was captured with that specific exemplar.


Create CRP

The first tool is devoted to CRP creation. Since videos are normally made of hundreds or thousands of frames, we typically need just one video to create the camera’s CRP. If we have the chance to capture this reference video ourselves, there are some important rules to follow:

  • Turn off digital stabilization, if possible
  • Place the device on a tripod, if possible, and point it towards a blank wall or a bright sky (so we avoid polluting the CRP with content from a specific scene)
  • Avoid capturing in dark settings, and avoid capturing saturated areas (e.g., if we’re pointing to the sky, leave out the sun!)
  • Capture a video of at least 45 seconds (which should amount to roughly 1,000 frames)

In our sample case, we opted for the blank wall.

OK, now we take the video out of the camera (in doing so, avoid recompression!) and write its path in the Reference Video input of the Create CRP tool:

The second thing we need to provide is obvious: where do we want to store the CRP file? The First Frame and Last Frame inputs allow us to only use a portion of the video (that could be useful if a 1-hour video is provided!). The Mode menu deserves some comments. 

By default, Amped Authenticate will use the standard procedure, “Use All Frames”: it extracts the PRNU noise residual from each frame of the video and accumulates it to form the CRP. The only problem is that it takes some time. For a 4K video, creating a CRP from a 60-second video can take an hour or so. If we’re in a hurry, we may rather consider the Group Frames strategy: it averages together groups of frames, extracts the noise residual from each averaged frame, and then accumulates residuals as before. Since averaging pixels takes much less time than extracting noise residuals, the Group Frames strategy can reduce processing time by roughly 10x. The user can choose the size of the group of frames with the Frame Group Size input.
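The two strategies differ only in where the expensive denoising step sits. Here is a sketch, with the same hedges as before: a Gaussian filter stands in for the real denoiser, and the function names are hypothetical, not Authenticate’s internals.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def residual(img):
    # stand-in denoiser; real tools use wavelet denoising
    img = img.astype(np.float64)
    return img - gaussian_filter(img, sigma=1.5)

def crp_all_frames(frames):
    """'Use All Frames': one (expensive) denoising pass per frame."""
    return sum(residual(f) for f in frames) / len(frames)

def crp_group_frames(frames, group_size=8):
    """'Group Frames': average each group of frames first, then denoise the
    averaged frame once -- roughly group_size times fewer denoiser calls."""
    acc = np.zeros(frames[0].shape)
    n_groups = 0
    for i in range(0, len(frames) - group_size + 1, group_size):
        avg = np.mean([f.astype(np.float64) for f in frames[i:i + group_size]], axis=0)
        acc += residual(avg)
        n_groups += 1
    return acc / n_groups
```

Averaging frames before denoising also suppresses random noise, which is why the shortcut still yields a usable CRP, just with less flexibility than processing every frame individually.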

One important element is the Digitally Stabilized Video menu.

Digitally stabilized videos are still a challenge for PRNU analysis: the PRNU pattern in consecutive frames gets misaligned because of stabilization algorithms, and this makes it much harder to accumulate the noise through different frames. Therefore, Amped Authenticate will not compute a CRP if a stabilized video is used as reference. If the user knows that the video is not stabilized, they can set “No” in the menu and proceed. If, instead, they know that the video is digitally stabilized and set “Yes”, Authenticate will show a warning message and stop. If the user does not know whether the video is digitally stabilized, then they can choose Autodetect, and Authenticate will automatically try to detect traces of stabilization.

In our sample case, digital stabilization is not an issue: the Google Pixel 3 uses optical stabilization, which mechanically moves the sensor, and this causes no harm to PRNU analysis. Moreover, the reference video was captured with a tripod, so digital stabilization would not have had any effect anyway.


After creating the CRP, we are ready to check whether the evidence video we have is compatible with the alleged source camera. We open the Identification tool from the Video Tools > Video PRNU… menu, and we’re presented with this dialog.

We have set the path to the CRP file in the PRNU CRP Filename input and the path to the evidence video in the Evidence Video input. The rest of the inputs are similar to what we’ve seen for the Create CRP tool. The only additional input is the Threshold for PRNU Camera Identification. That value sets the threshold above which the correlation score (Authenticate uses the Peak to Correlation Energy, PCE) is considered a positive match. The default value of 60 was determined by looking at the scientific literature and by validating our implementation on native videos from the VISION dataset. With this value, we obtained an average true positive rate of 80% for a modest false positive rate of 0.2%. However, when working on a real case, we may prefer to find some devices of the same model as the alleged camera, capture some reference and test videos with them, and perform a dedicated validation. This will bring more evidential value to the findings.
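For reference, PCE itself is simple to compute. The sketch below follows the common definition from the PRNU literature (squared correlation peak over the average energy of the rest of the cross-correlation surface); the 11×11 exclusion window and the function name are illustrative choices, not necessarily Authenticate’s internals.

```python
import numpy as np

def pce(a, b, exclude_radius=5):
    """Peak-to-Correlation Energy between two equally sized 2-D signals,
    e.g. CRP*image vs. the test residual. Circular cross-correlation is
    computed via FFT over all shifts; PCE = squared peak divided by the
    mean squared value of the surface, excluding a window around the peak."""
    a = a - a.mean()
    b = b - b.mean()
    xc = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    peak = np.unravel_index(np.argmax(np.abs(xc)), xc.shape)
    mask = np.ones(xc.shape, dtype=bool)
    rows = [(peak[0] + d) % xc.shape[0] for d in range(-exclude_radius, exclude_radius + 1)]
    cols = [(peak[1] + d) % xc.shape[1] for d in range(-exclude_radius, exclude_radius + 1)]
    mask[np.ix_(rows, cols)] = False
    return float(xc[peak] ** 2 / np.mean(xc[mask] ** 2))
```

Because the cross-correlation is evaluated over all shifts, the peak need not sit at zero displacement, which is what makes PCE robust to small geometric misalignments between the residual and the CRP.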

That said, after running the process, we’ll be presented with this simple output table:

As usual in Authenticate, the table shows the information needed to allow analysis repeatability: the paths of the CRP and evidence files, the MD5 hashes computed on the values of the reference CRP and of the estimated video PRNU, and the number of processed frames. Then, in line 13, we see the obtained PCE value: 5318.76, which is well over the set threshold of 60. Therefore, the Compatibility row reads Positive. The fact that No transformation is written in row 15 and None in row 16 indicates that there was no need to rotate or rescale the evidence video so as to match the CRP.

You can export all this information by right-clicking on the table and choosing one of the available options.

And we’re done! We’ve linked the evidence video to the alleged source device with just a few clicks.


Although PRNU analysis is best known for its ability to link media to the source device, there’s an interesting additional use: tampering detection. When dealing with images, Amped Authenticate has a dedicated PRNU Tampering filter in the Local Analysis section: you set a CRP file, you load an image, and Authenticate will compare the PRNU in the image with that in the CRP on a local basis (“block-wise analysis”) so as to expose regions that have been tampered with. Indeed, when you alter pixels, you’ll normally be disrupting the original PRNU signal.
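The block-wise idea can be sketched as follows. Again, this is a simplified illustration, with a Gaussian filter as a stand-in denoiser and hypothetical function names, not the actual PRNU Tampering filter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def residual(img):
    img = img.astype(np.float64)
    return img - gaussian_filter(img, sigma=1.5)

def blockwise_map(img, crp, block=32):
    """Correlate the image residual with CRP*image, block by block.
    Blocks with low correlation are candidate tampered regions."""
    img = img.astype(np.float64)
    res = residual(img)
    h, w = res.shape
    out = np.zeros((h // block, w // block))
    for bi in range(h // block):
        for bj in range(w // block):
            sl = np.s_[bi * block:(bi + 1) * block, bj * block:(bj + 1) * block]
            a = res[sl].ravel()
            b = (crp[sl] * img[sl]).ravel()
            a = a - a.mean()
            b = b - b.mean()
            out[bi, bj] = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return out
```

A spliced-in patch coming from a different sensor shows up as a block whose correlation drops toward zero while the pristine blocks stay high.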

With videos, we replicated the same idea, but along the time axis.

We can analyze the evidence video by computing the PRNU from groups of frames and comparing each group’s PRNU against the CRP. Then, we plot the values obtained for each group. If the whole evidence video has been captured with the camera to which the CRP belongs, we’ll have a nice plot where the PCE always remains over the threshold value (represented by the red horizontal line). In our example, that’s indeed the case!
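This temporal analysis can be sketched like so. Plain normalised correlation is used here as a simplified stand-in for the PCE that Authenticate computes, the denoiser is again a Gaussian filter, and the function names are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def residual(img):
    img = img.astype(np.float64)
    return img - gaussian_filter(img, sigma=1.5)

def groupwise_scores(frames, crp, group_size=50):
    """Score each group of frames against the CRP over time.
    Groups scoring below the threshold are candidate spliced-in segments."""
    b = crp.ravel() - crp.mean()
    nb = np.linalg.norm(b)
    scores = []
    for i in range(0, len(frames) - group_size + 1, group_size):
        grp = sum(residual(f) for f in frames[i:i + group_size]) / group_size
        a = grp.ravel() - grp.mean()
        scores.append(float(a @ b / (np.linalg.norm(a) * nb + 1e-12)))
    return scores
```

Plotting the returned scores over time reproduces the staircase plot discussed below: one value per group, high for genuine segments and dropping for frames that carry a different sensor’s pattern.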

You may have noticed that the plot has a “staircase” behavior: this is normal and reflects the fact that we have a different PCE value for every group of frames, not for every single frame. Therefore, in the plot, all frames belonging to the same group share the same value. By setting lower values in the Frame Group Size input of the Tampering tool, we obtain finer time resolution (at the cost of lower PCE values, because we’re estimating the PRNU from fewer frames); on the other hand, by setting a larger group size, we’ll have a more accurate PRNU estimation (which means higher PCE values for matching frames), at the cost of lower time resolution.

The plot shown above allows us to say not only that the whole evidence video is compatible with the alleged camera, but also that we found no trace of frame insertion from other devices. By contrast, this is a plot obtained from a video where the central 400 frames were pasted in from footage captured with a different device.

It should be noted, however, that this analysis technique will not detect the case where frames are pasted from a different video captured with the same camera (the PRNU would be the same, since the sensor is the same), or the case where some frames are simply “cut away” from the original video.


In this How To, we have seen how we can link a digital video to its originating device. PRNU-based source attribution is one of the most established analyses in multimedia forensics and is currently used and appreciated by law enforcement and forensic labs worldwide. Amped Authenticate makes this technology available for video as well, adding yet one more important element to the toolbox of all forensic video experts!

For more information contact Amped Software at [email protected] 
