Deepfake Detection And Authenticity Analysis In Amped Authenticate

Marco: So, welcome and thank you for joining this webinar about deepfake detection and authenticity analysis in Amped Authenticate. Let me just spend a few words about me and about the company. So, I am Marco, and I’m a computer engineer with a PhD in multimedia forensics.

So, I’ve been studying authentication of multimedia contents for more than 10 years now (getting old). And at Amped Software, I’m the forensics director, so I coordinate the product roadmap, the training and the support, and I also coordinate the research activity because we have many partnerships with universities and students. I also have experience as an expert witness for digital image and media forensics in Italy.

More interesting about the company: so, Amped Software is an Italian company founded in 2008 with a subsidiary in the US. You can see part of the team here, almost all of us in this picture. And the mission of the company, since the very beginning, has been to become the one-stop shop for all the needs related to image and video analysis, enhancement and authentication. And the vision is justice through science.

That means that whatever we do in our work and in our software is based on science. We like to make things very clear, reproducible, accurate, okay? So that you are safe when you use our software and go to court with the results that we provide. We’ll see what it means during the demo. We have users in more than 100 countries. And so, let’s get started.

So, just a few more slides and then we’ll go into the software. Why should we be interested in digital image authentication and analysis? Well, because if you think about the 5WH investigative model, which says that when you’re running an investigation, you should answer these six questions: who, where, what, when, why, and how? If you can answer all of these, you should have a pretty clear idea of what happened and who you are dealing with, okay?



Well, if you consider images and videos, they can often answer all of these questions, if you go beyond pixels, okay? Also, I like the ABC rule that you can find in this book, okay? Which means assume nothing, believe nothing, challenge and check everything, which is very much what Amped Authenticate is for, okay? When you’re provided with an image or a video, you shouldn’t just buy what you see, okay? You should check whether what you see is reliable, okay? And we’ll see what it means in the rest of this webinar.

So, with Amped Authenticate, you can verify the integrity and authenticity of images before admitting them as evidence. We see that integrity and authenticity are very different. You can extract valuable information from image metadata such as date, time, position. You…most importantly, you can cross check this information, as we will see. You can analyze the source device to link illegal images and videos to perpetrators. Just like you do with the bullet in the gun. You can do camera ballistics with Amped Authenticate. And since 2021, we’ve started adding support for digital videos as well. And we’ll see some examples today.

So, let’s start with an example. We are given this picture by this guy who claims he cannot be guilty because that day, at that time, he was sitting here having fun at the dock, okay? And the problem is, first of all, is this image consistent with the alleged source device? So, he may say, “I captured this image with my smartphone, you know, and this is it”. We may check whether it is actually compatible with the smartphone. We could check whether instead it comes from a social media platform. Perhaps he downloaded it from Facebook, okay? Is it a true and accurate representation of subjects and events? So, answering this question means assessing the authenticity of the image. And then, has the image been processed in some way?

For example, cropped to remove something, resized, compressed, and so on, so forth. And of course, eventually, have some pixels been altered or manipulated? So, with Amped Authenticate, we can answer all these questions or at least get information that helps answer them. And as you will see, it’s not just about getting a forgery localization map and saying, “okay, this is the blob, this is the fake region.” Image authentication is much more than that. It’s a, let’s say, complex process where you have to put together multiple pieces of information.

So, let’s dive into the software, but before doing that, let me add just one more slide, which is this one. It is a very simplified model of the digital image lifecycle, okay? Very simplified. So, we have the actual scene here on the left and the scene goes through the…so the blue box is a digital camera, okay? (Which could be a smartphone as well.) The image goes through a lens, and then through an optical filter, which for example will remove infrared light, okay? And then it goes through the CFA, the color filter array pattern, because the imaging sensor, which captures light intensity, cannot capture color.

It will just count the photons falling on the sensor. And so manufacturers use this kind of pattern to capture different colors at different pixels on the sensor. And then you need to interpolate this mosaic of colors to get a full color image, okay? This is still the case with modern smartphones; although it’s getting more complicated, we still have this kind of processing inside.

Then we have in-camera software processing. Once upon a time it was white balancing, contrast, saturation. But nowadays, of course, you may have every kind of fancy AI processing to beautify faces, remove artifacts, you know, and whatever else. And eventually you have some in-camera compression. It could be JPEG, it could be HEIF compression if you’re using iOS. And then you have the addition of metadata and the possible addition of a thumbnail and a preview. And this is the image that comes out of the device.

But as you know, most people today will not take a picture if not for sharing it, so we may also have some out-of-camera processing, which could be sharing through a social media platform. Perhaps we may have some editing or some enhancement and multiple compression steps as part of this process. And eventually we are given this image down here and we want to work blindly, so just given this image, okay, we want to assess whether we are dealing with the camera original image, which is the first version produced inside the device, okay? Or we are dealing with a processed image, and if so, we want to know whether it is authentic or not, okay?

So, as you can see, these are two different questions. One question is about integrity, which means: is the image that I’m seeing the camera original image, the first version created inside the capturing device, or not? And if not, okay, we may wonder whether this image is authentic, okay? So, can I trust what I see here or not?

So, let’s go and see how we can do that with Authenticate. So, let me show you the main software interface. As you can see, this is the interface of the software. Here on the top you have the evidence image loader. So you can drag an image file here and it will be loaded. Let me do it now. I will load one image here. There we go, okay? And you can also load the reference image if you may…if you like, which will put it side by side for comparison, okay? In this case let’s use the evidence to start with.

So, we have a picture of a car here and we have here the filters panel. The filters panel contains many, more than 40, filters for analyzing the image, grouped into categories. You can see we have the overview, file analysis, global analysis, camera identification, local analysis, and geometrical analysis. We can start from the overview of course, where we have the visual inspection and the file format. The overview category is for giving a first triage of the picture, okay? Checking what this image is about.

So, visual inspection allows us to watch the content of the image. We can zoom in, okay? And we do not interpolate pixels. This is not an enhancement zoom. This is just a zoom that allows you to check the information that you have in the pixels of the image. So, pixels will just get larger, they will not be interpolated, okay? And the file format filter gives you a quick overview of the main properties of this file, okay? So, for example, we will see the hash value, of course, then we’ll see the format, the resolution, the extension, the presence and size of the thumbnail or the preview, the presence and the amount of different kinds of metadata.

So, we see we have several EXIF fields and some color profile fields. The EXIF make, model and software. Some information about the JPEG compression. Some information about the EXIF dates: create date, modify date. And some information about the file system modify and access dates, okay? As you can see, we don’t see any red writing in this column, which means that this image has no warnings. So, this image is indeed a camera original image that I took with one smartphone, okay? And so everything is right. We will see that when we load processed images, most of the time we will get something in this column, which is already telling us that the image is not a camera original image. So, its integrity is somewhat compromised.

Now let’s go back to the visual inspection. We can see that we have an image of a car in the street, okay? We can see the sun is positioned behind the car, okay? If you go to file analysis and check the EXIF data of the image, you will see that we have a load of information. We have more than 70 fields, as you can see. And we also see that we have the create date and also GPS information, as you can see down here, okay? Yes, GPS.

So, that means we can go to the tools and, if we are connected to the internet, we can show the image location on Google Maps, which will just…oops, sorry…open up Google Maps. Yes. Yes. Google Maps, there we go. And we see that we are positioned here. We can switch to the satellite view…and yes, we can see this building here with solar panels on the top of it. And if we go back to the visual inspection, yes, you can see them right here, okay? So, it seems that the GPS metadata is indeed reliable. We can go further on and check also the sun position on suncalc.org, which is a website, okay? We have an interface to connect to this website. And as you can see, we can see the sun position here, okay?

It means that the sun, at that time, on that day (based on the EXIF information in the image), was positioned here. And since the car is parked here, it is perfectly reasonable to see the sunlight falling over these buildings, as we can see indeed in the image. You can see here, you can see here, okay? So, this is a way to check the metadata you can find in the image. You know that metadata can be easily manipulated, okay? With ExifTool or other software. But it is hard to manipulate it and keep everything consistent, okay?
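(As an aside, the GPS coordinates behind that map check live in the EXIF GPS IFD and can be read with a few lines of Python. Below is a minimal sketch using Pillow, with a hypothetical file name; Amped Authenticate of course reads and cross-checks these fields for you.)

```python
from PIL import Image

def gps_to_decimal(dms, ref):
    """Convert EXIF degrees/minutes/seconds rationals to decimal degrees."""
    degrees, minutes, seconds = (float(v) for v in dms)
    value = degrees + minutes / 60.0 + seconds / 3600.0
    return -value if ref in ("S", "W") else value

img = Image.open("evidence.jpg")              # hypothetical file name
gps = img.getexif().get_ifd(0x8825)           # 0x8825 = GPSInfo IFD pointer tag

if gps:
    lat = gps_to_decimal(gps[2], gps[1])      # tags 2/1: GPSLatitude / GPSLatitudeRef
    lon = gps_to_decimal(gps[4], gps[3])      # tags 4/3: GPSLongitude / GPSLongitudeRef
    print(f"https://www.google.com/maps?q={lat:.6f},{lon:.6f}")
```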

Then you can go even further on with your analysis with the JPEG quantization table. This will tell you how the image has been compressed to JPEG. Some devices use customized tables. This specific device uses a JPEG standard table, okay? Some other devices use device-specific tables, which allows you to check the quantization table against a database of tables. And so we can check whether the information that is declared in the EXIF metadata, that you can read here on the top, is indeed consistent with how the image is stored, okay? For example, in this case, we can see that this quantization table is indeed a standard table, but we can also see that the Google Pixel 3a is known in our database to use this kind of table at this quality, okay?
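(For the curious, a JPEG’s quantization tables can be read directly with Pillow and compared against reference tables; a rough sketch with a hypothetical file name follows. Note that the table order and value layout depend on the Pillow version, and Authenticate ships its own curated database for this check.)

```python
from PIL import Image

img = Image.open("evidence.jpg")              # hypothetical file name
tables = img.quantization                     # dict: usually 0 = luminance, 1 = chrominance

for table_id, values in tables.items():
    print(f"Quantization table {table_id}:")
    for row in range(8):
        print(" ".join(f"{v:3d}" for v in values[row * 8:(row + 1) * 8]))

# Comparing these 64-value tables against tables harvested from known camera models
# (or against the standard IJG tables) is the idea behind the database check.
```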

And you can check the hex content of the image, of course, and you can check the JPEG structure. When you are analyzing the integrity of an image, it may be very helpful to look for reference images that you know come from the alleged source device. So, in this case, it would mean that we’d need to find a Google Pixel 3a smartphone, capture some images and compare them side by side with our image. If you cannot access this kind of device, you can look for reference images on the web. We have this tool here which will search for images on Flickr and it will automatically filter out images that are known to be processed by Photoshop or this kind of software, because it’s written in the metadata, okay? You can click on search. I will not do it now because it can take some minutes.

I’ve already downloaded some, okay? And once you have downloaded the images, you can load one of them here and compare them side by side. Or you can also do a batch comparison to compare the properties faster. In this case, I’m loading this reference image that comes from the web. You can see the classical Flickr username…sorry, file name here, okay? And as you can see down here, if we go and compare the various things, we see that yes, there is just a slightly different number of EXIF metadata fields, a slightly different smartphone model.

This is the XL version of it, okay? But still, if you compare the quantization tables, they are the same, okay? And if you compare the JPEG structure, which is how the file is structured inside, you can see that, side by side, they are completely identical, okay? Which helps build our confidence that this information about the source device is indeed reliable. So, this is all part of the integrity verification of an image, okay?

We also have a tool here to check whether an image comes from a social media platform, okay? It is a classifier. So we click on it. You see that this image here is not compatible with any known social media platform. But if we load another image, for example, this one, which I downloaded yesterday from Facebook, okay? You can see that this time…sorry, this is the picture. And you can see that this time the image is classified as compatible with Facebook and we also analyze the file name and this kind of file name is indeed compatible with Facebook, okay?

So, both the properties of the image, how the image is compressed, okay? And the filename of the image. They’re both compatible with Facebook. And you can do the same: we have, I think, seven or eight social media platforms in our database. This image here was captured yesterday in Brussels, with our CEO and our sales director attending an event there at the European Parliament. It is from a Twitter profile, okay? And indeed you can see that the image is classified as coming from Twitter, okay? So you can use this filter to check whether an image comes from a social media platform.

Good. Let’s now go to the next step of the analysis. So, we have seen, quickly, some ways to assess whether an image seems to be camera original, so whether it has integrity or not. And as you have seen, we are doing this in a blind fashion. We cannot…we are not relying on external information, side information that is shipped together with the image. We are only using the image that we’re given and its metadata and its properties.

We can do more than that. We can go and check the processing history of an image, which means trying to understand whether an image has been compressed once, twice, and so on, so forth. For example, let’s take this image here. This image has been captured in Siena, which is a nice city in Italy, okay? And it has then been compressed, okay, with quality 90, okay?

Now if you go and check…we have this global analysis filter category where we have several filters which aim to understand whether the image has been processed as a whole or not. In particular, in this case, it’s interesting to see the JPEG Ghosts Plot, okay? Which is computing right now. And what this filter is doing is recompressing the image 50 times, from JPEG quality 51, 52, 53 up to 100, okay? And it will compute the difference between the evidence image and its recompressed version and show you this plot where the average difference, okay, the global difference between the images, is computed and displayed.

Now, as you can see here, we have a descending behavior, which is expected because the higher the quality (100 is the maximum), the more similar the recompressed image is to the original. But we can see a strange peak here, okay? A local minimum at quality 90, okay? Which is indeed the quality at which the image was compressed. Okay, of course we could have read this from the JPEG header, of course, there is not much surprise in that, it is just a crosscheck. But of course if I saved this image to PNG and ran the same analysis again, I would still see this peak here, okay?

So it will still detect that there was a JPEG compression at quality 90 in its processing history. But most interestingly, if I now recompress the image again at quality 80, okay? And so, if I analyze again the image that has been compressed twice, you will see that yes, I still see a peak corresponding to the last compression quality, which is 80, but I also still see the peak at 90, okay? So, I still see the peak due to the previous JPEG compression.

So, this filter helps us…this kind of analysis, which is based on a published state-of-the-art paper, allows us to understand that an image has been compressed more than once throughout its life cycle, which of course goes against its integrity, because we normally expect that an original image with integrity has been compressed only once, at the time of acquisition. That’s the idea.
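(To make the recompress-and-compare idea concrete, here is a minimal sketch of such a curve in Python. It is only an illustration of the principle; the actual JPEG Ghosts Plot filter follows the published technique and is more refined. The file name is hypothetical.)

```python
from io import BytesIO

import numpy as np
from PIL import Image

def jpeg_ghost_curve(path, qualities=range(51, 101)):
    """Recompress the evidence image at each JPEG quality and return the mean
    absolute difference between the evidence and its recompressed version."""
    evidence = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64)
    curve = {}
    for q in qualities:
        buf = BytesIO()
        Image.open(path).convert("RGB").save(buf, format="JPEG", quality=q)
        recompressed = np.asarray(Image.open(buf).convert("RGB"), dtype=np.float64)
        curve[q] = float(np.abs(evidence - recompressed).mean())
    return curve

curve = jpeg_ghost_curve("evidence.jpg")       # hypothetical file name
# The curve normally decreases as quality grows; a dip (local minimum) well below
# quality 100 suggests a previous JPEG compression at roughly that quality.
dips = [q for q in range(52, 100) if curve[q] < curve[q - 1] and curve[q] < curve[q + 1]]
print(dips)
```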

Okay, we have many more filters here in the global analysis, but they are quite technical, okay? So I will not go into the details of all of them, but keep in mind that together with the software, you can also get the training, where we spend several days together explaining why this kind of plot, a cone-shaped plot, should warn you, okay? And raise your attention on this image. So, if you’re interested, just ask for the training.

Okay, let’s now move to the authenticity part, which is of course the most fun. So, here we are trying to answer a different question. Let’s go back to the image we started this webinar with. The guy at the dock, okay? What we want to understand here is whether this image is authentic or not, which means whether we can trust its content or not. As you can see, the moment I’ve loaded it, I see this filter is now written in red, okay? It means that there is a warning raised by this filter. And indeed, as you can see, we have several things written in red now here, okay? For example, we have Photoshop written in the EXIF software field, okay?

So, here the forger was not very clever. He left this trace in the image metadata, okay? And so we raise a warning, because we have a table with tens and tens, perhaps a hundred, image editing software packages. So, when we match one of those, we show you a warning. And then we raise several other warnings: for example, we check the dates and we see that the modify date in the EXIF metadata is much, much later than the create date. And so you get this warning here. We see that the image is compressed without chroma subsampling, which is not very typical for camera original images, at least until a few months or years ago. And so you get another warning.

This doesn’t mean that the image is for sure tampered with, or that its integrity is for sure broken, but well, the fact that it’s been opened in Photoshop and saved is quite a tell-tale. But what we really want to understand here is whether the authenticity is compromised. And so we have to go to the local analysis filter category. Here we have many different filters whose goal is to understand whether some part of the image has been manipulated, okay? Once again you can see we have many of them, but a couple of them are red.

So, let’s go here and have a look. And by running this Aligned Double JPEG compression analysis (which is again based on a scientific paper), we see that the face here gets completely red, okay? So, this filter works under the assumption that when you capture an image, it is stored as JPEG, the original image. Then you open it in Photoshop or whatever other software, you use whatever tool you want to tamper with some part of the image, perhaps a painting tool or an AI-based inpainting tool, whatever you want, and then you save to JPEG again. In this workflow, it will happen that the original pixels show traces of double JPEG compression, okay? Because the image is processed block-wise.

So, the blocks that were untouched are JPEG compressed twice, while in the manipulated blocks, okay, you will only see the last compression step, because the previous one is erased by the manipulation. And so we can detect here that the face, okay, has something wrong. And then if we zoom in on it, yes, we can see indeed that there is something wrong. And so we could go back to visual inspection, for example, and start annotating this image.

So, let me introduce the project panel here. Whenever you find something interesting, like the result of this filter, you can click on this flag and you create a bookmark for that, okay? You can double click on it and you can add comments for that, so…And you can also apply some annotations to the image, okay? So for example (well, it is not really needed in this case), but you could apply an arrow here to clarify what you mean. Or you could apply a text object, a free hand pencil, okay? And whatever else.

So, we could also go back to the visual inspection and we could try to magnify the face of the subject here. So, we add the bookmark, we annotate it, and we drag a rectangle here and we place it here, okay? And as you can see, if we increase the zoom level a bit, you see indeed that there are some strange artifacts here over the eyeballs, okay? And so we can add some text below. Yeah. And of course then you can configure the look and feel of this text annotation and fit it. Okay. There we go.

Okay. Once we are done, we can save the project and generate a report for that. We can perhaps do it now. So, we save the project and we generate the report. When you generate the report, you can choose your preferred output format. We go for PDF in this case. And here you get your report for this image where you have the information about date and time.

Okay, I didn’t write the author, but you can write it. Information about the platform and then the list of bookmarks that you created. And for each one you have the input file, its MD5 hash, and then you have the full details of each filter that you used, how it was configured, a quick description of what it is, the complete details even of the annotations that you’ve put there, and the bibliographic references to the papers which explain how each filter works, okay? So, that’s what we meant when we said that we work with the vision of justice through science, because what we have in our software is published stuff, okay? We are not using magic secrets to produce results, okay? And here you have your report, which you can submit to court, and you are very protected because you have the full details of whatever we have done.

Okay, let’s see some other examples, okay? Of authenticity verification. For example, here we have a picture of a desk, okay? Which looks nice at first glance. Once again, if you go to file format, we see that we have a different retouching software this time, okay? And here we use a different filter, which is the Error Level Analysis, which is again based on recompressing the image. And you can see that, yeah, there is a strange suspicious blob here on the desk, which suggests that probably something has been removed, okay?

We could use the Histogram Equalization filter to get, yeah, a processed version of the image where you can see indeed that here on the desk something is wrong, okay? But it was not as evident in the original image. And so after looking at the result of this filter, your attention is brought to this point in the image where you can see the manipulation.
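(The classic Error Level Analysis recipe is only a few lines; this is a rough sketch with an arbitrary recompression quality and amplification factor, not the exact implementation used by Authenticate. File names are hypothetical.)

```python
from io import BytesIO

import numpy as np
from PIL import Image

def error_level_analysis(path, quality=95, scale=15):
    """Recompress once at a fixed JPEG quality and amplify the absolute difference.
    Regions saved at a different error level (e.g. pasted or retouched areas)
    may stand out visually in the resulting map."""
    original = Image.open(path).convert("RGB")
    buf = BytesIO()
    original.save(buf, format="JPEG", quality=quality)
    recompressed = Image.open(buf)
    diff = np.abs(np.asarray(original, dtype=np.float64) - np.asarray(recompressed, dtype=np.float64))
    return Image.fromarray(np.clip(diff * scale, 0, 255).astype(np.uint8))

error_level_analysis("desk.jpg").save("desk_ela.png")   # hypothetical file names
```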

Another, similar example is this document, okay? We have an invoice here, where Mr. Notso Smart spent 64,000 euro on some malware cleaner and virus scanner. And if we look at this image, we see that this filter turned red, okay? And if we click on it, hmm, we see that we have a red blob here and also here. So apparently here, probably, there was a signature that was removed. And once again, if you check the file format, this time the Pixlr smartphone app was used.

So, as you can see, we didn’t do anything strange. We used classical, off-the-shelf image editing software to manipulate the images, dropped them into Authenticate, and then analyzed them, okay? So we are not cheating. Of course, we cannot show true cases in a webinar, but this is very similar to what would happen in a real case, okay?

One different way of manipulating an image is by cloning, okay? Cloning means that instead of hiding something by removing it, okay, you copy and paste part of an image over the image itself. This kind of forgery is commonly hard to detect because, of course, the noise level of the image remains the same, because you’re copying part of an image inside the image itself. The camera traces remain the same because it comes from the same image, so the same camera. So, what we have is a couple of filters down here, Clones Blocks and Clones Keypoints, okay?

That will process the image and look for suspiciously similar regions within the image. Now this is a very well made forgery, okay? That I could leave you here for a couple of hours and it would be very hard to find the manipulated region. But if we have a look at these two filter outputs, you will see that we have some tell-tale results down here, okay?

We see these lines connecting the stones. And so if we go and zoom in on it, we now realize that this stone is very similar to this one, and this one to this one, and this one to this one, and so on, so forth, okay? Moreover, if we use this other filter here, Clones Blocks, we see that not only the stones, but also the background here and part of the street, okay, are suspiciously similar in these two regions. And the explanation here was that in the original image (that I think I have, let me go to visual inspection), yes. In the original image, we actually had (let me load it as a reference) someone walking here, okay? And so they removed this person by copy-pasting information, and as you can see, it was very effective in this case, okay? So this is about clone detection.
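(The keypoint-based idea can be sketched with OpenCV: match the image’s descriptors against themselves and keep pairs that are nearly identical but lie far apart. This is a simplified illustration with arbitrary thresholds, not the Clones Keypoints filter itself; the file name is hypothetical.)

```python
import cv2
import numpy as np

def clone_candidates(path, max_hamming=40, min_offset=40):
    """Return pairs of keypoints that look the same but lie far apart:
    candidate source/destination regions of a copy-move (clone) forgery."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=5000)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = []
    # k=2: the best match of each descriptor is (normally) itself; the second-best
    # is the nearest *other* keypoint, which is the interesting one here.
    for _self_match, other in matcher.knnMatch(descriptors, descriptors, k=2):
        p1 = np.array(keypoints[other.queryIdx].pt)
        p2 = np.array(keypoints[other.trainIdx].pt)
        if other.distance < max_hamming and np.linalg.norm(p1 - p2) > min_offset:
            pairs.append((tuple(p1), tuple(p2)))
    return pairs

print(len(clone_candidates("stones.jpg")))     # hypothetical file name
```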

Okay, one more thing we could check is…oh, yes. In case you have an image which is embedded within a PDF file, like in this case here, okay? So, we have a traffic offense complaint, okay? Where someone is saying that he could not be where the police claims he was, okay? Because he has proof that he was somewhere else that day, at this place, okay? So, if you’re asked to authenticate this image inside a PDF document, do not take a screenshot of it, okay? Because of course when you take a screenshot of an image from the web browser, or from whatever else, you are losing loads of information: you are losing all the metadata (if there are some), you are losing the compression properties of the image, because you are re-capturing it from scratch with the screenshot, okay?

Authenticate has a tool to extract the image from the PDF document, which is just here under the tools panel. You have the ‘Extract Embedded Images From PDF’, okay? (Let me just look…go to the proper folder…quickly, without browsing everything. Okay, there we go.) And you click extract, and it will scan the PDF and save all the images inside an external folder. There we go. The folder should open up. There we go. And yes, we have these images here. (Just a moment, let me load them. Yeah, there we go.)

So we extract the image and…yeah, as you can see, it may happen, like in this case, that when you put an image in a PDF, it may even retain the EXIF metadata, okay? It will not necessarily be removed when you embed an image in the PDF. And so in this case, we can still see that the image was processed with Photoshop, and we can run a complete analysis of it and, for example, detect that the license plate, probably, in this case, okay, was manipulated. And as you can see in this case, whoever manipulated the license plate also took care of mimicking the motion blur effect, okay? So this forgery is very, very convincing, okay, because the license plate has been degraded to match the way the car is degraded in the picture, okay?

Good. Let us now go to another, more recent topic, which is deepfakes and synthesized images, okay? I think I will get some more slides to think about this…talk about this, sorry. (Yes, these ones. Okay.) So, in the last few years we have seen several new ways of creating fake pictures arising. One is generative adversarial networks, GANs, which are the main tool behind many deepfakes, okay? So, a GAN is a…yes, a generative adversarial network. It means that you use two neural networks, two deep neural networks, to create images.

It works this way: you have a generator network that starts from random pixels and tries to generate a face, for example, okay? And then you have a discriminator network which is trained with real images of faces, okay? And the goal of the discriminator is to understand whether what comes from the generator is a real image or not. And of course at the beginning it is very easy to say that it is not. And so this is fed back to the generator, which will gradually improve and improve in the attempt to fool the discriminator. And after some iterations (where “some” is of course thousands), you will eventually convince the discriminator that your image is real.

Once you’re done, you drop everything, you just keep your generator, and you have a generator which creates faces. And this is what’s behind this kind of face generation. As you can see, basically you need many real images and quite a lot of computational power, and you can then generate faces like the website thispersondoesnotexist.com, okay? Well, you can generate as many faces as you want, okay? These are all hallucinated faces. These people actually do not exist, okay? But they are very convincing, although sometimes you may have some visual inconsistencies like in this case here, okay? But as you can see, the hair and everything looks very convincing.
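(For readers who prefer code, here is a toy version of that adversarial loop in PyTorch, using made-up 2-D data instead of images. Real face GANs such as StyleGAN use deep convolutional networks and millions of photos; this only illustrates the generator/discriminator game described above.)

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))   # noise -> sample
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))    # sample -> real/fake logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=128):
    # Stand-in for a dataset of real images: a small 2-D Gaussian blob.
    return torch.randn(n, 2) * 0.3 + torch.tensor([2.0, -1.0])

for step in range(5000):
    # 1) Train the discriminator to separate real samples from generated ones.
    real, fake = real_batch(), G(torch.randn(128, 16)).detach()
    loss_d = bce(D(real), torch.ones(128, 1)) + bce(D(fake), torch.zeros(128, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # 2) Train the generator to fool the discriminator.
    fake = G(torch.randn(128, 16))
    loss_g = bce(D(fake), torch.ones(128, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# After training, only G is kept: it maps random noise to convincing samples.
```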

Now, to combat this kind of fake image, we have recently added a dedicated filter to Authenticate, okay? So let me show an example of it. It is called Face GAN Deepfake, okay? It works this way. Let’s say you have this face here, okay? This comes from an online data set of real faces. So, we know that this is indeed real, okay? It is a model, okay? And we go to this filter and we click on the medium overlay, okay? It will extract the faces in the image, classify them, and provide you with a label and a score, okay? A confidence score. As you can see, in this case, the network analyzed the image.

So we developed a neural network to classify these images. This has also been published in a paper, okay? And our network classified this image as real, okay? With, yeah, sufficient confidence: 0.81, and the maximum is 1, of course. While if we load, for example, this kind of image here, which is also very convincing, okay? But this time it is fake, okay? This was generated with an online service that creates faces, okay? Of course it is not always as trivial, because you will usually create one of these faces and use it to create a fake picture.

So for example, you may have a picture like this, where you have me and another person. And so, since you have two faces, you can already see the thumbnails appear, okay? The network will classify the two faces separately, okay? And classify each of them, and attach a label to each of the faces. Ah, there we go. Okay. So I am real, luckily! And this face here is a fake, okay? Actually it was downloaded from thispersondoesnotexist, so this was the actual image, okay? And after downloading this image from this generation service, I edited it to put it inside this picture. So I changed the color, I rescaled it, and I adapted it to fit better inside this picture.

And despite all this processing, you see that the confidence has decreased from 1 to 0.83 because I processed the pixels, but we managed to detect that it is a deepfake anyway, okay? Now, it is important that you understand that when we say ‘real’ here, it doesn’t mean that the image is authentic. It is one of the two classes on which this network has been trained. So, this network has been trained to detect either deepfake images or real images, but ‘real’ means non-deepfake, okay? It could still be manipulated, but in some other way, that’s the idea.

And, so yeah, this will work even if you have images with many faces. Like, for example, this one here, let me show you. This is a picture from one of the papers, actually, where they presented the face synthesis software, okay? And if we run the analysis on this image, it will attach a label to each of the faces. And as you can see, it will correctly detect that they’re all fake, but when they get too small, okay, the network stops working and it misclassifies the faces.

This is normal and typical in forensics: when you lose too much information because you have very little resolution (in this case, the faces are downscaled too much), we no longer have sufficient traces to detect that the faces are fake, okay? So, keep in mind that resolution matters. The amount of data that you have matters. Since it could be hard to read labels on faces that are so small, we have a collage version of the filter which will extract all faces and rescale them to put them in a collage like this, which is much better for creating the report.

Okay, so this was about face GANs. GAN-generated images are very different from synthesized images like those you can create with Midjourney and this kind of new service, okay? These are known as diffusion models, okay? We will not go through the theory of it, but basically they’re based on the idea that the architecture works in two steps, okay? During training, they start from original images and progressively add more and more noise to create noisy images. And then, in the generation phase, they start from noise and denoise, denoise, denoise the image. And in doing so, you can condition this denoising to create some kind of object, okay?
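(As a tiny illustration of those two phases, here is the standard DDPM-style forward, noising, step in code; the reverse, generative phase would use a trained denoising network, optionally conditioned on a text prompt, which is omitted here.)

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)          # noise schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)  # cumulative product of (1 - beta_t)

def add_noise(x0, t):
    """Jump directly to timestep t of the forward process:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise."""
    eps = torch.randn_like(x0)
    x_t = alpha_bar[t].sqrt() * x0 + (1.0 - alpha_bar[t]).sqrt() * eps
    return x_t, eps

# During training the network learns to predict eps from x_t; at generation time
# it starts from pure noise and removes the predicted noise step by step.
x0 = torch.rand(1, 3, 64, 64)                  # a toy "image"
x_noisy, _ = add_noise(x0, t=500)
```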

And so, for example, I gave Midjourney a text prompt like “a man in front of a computer delivering a webinar, photorealistic”. And I got this picture here, okay? This has been created in a few seconds by Midjourney, which is one of the many available services for creating this kind of image. Now, detecting this kind of image is not the same as detecting face GANs. So we do not expect our face GAN filter to detect this face as a GAN-generated image, because it is not: it has been created with a completely different model. However, there is still something you can already do with Authenticate to detect this kind of image. And let me show you a couple of examples.

So, let’s start from this image here. (Sorry. Yeah.) Okay. This is an image. You can see the prompt used to create it here: a CCTV image of a car accident in sunlight, okay? And it created this image here, okay? And you can see that the image is quite photorealistic, okay? But we have many shadows. We have the sun positioned up here and we have shadows in the image. Now, diffusion models do not have a ray tracing system to create shadows. It created quite convincing shadows, if you look at them, because they’re all more or less in the same direction. But we can check whether they are correct or not with the shadows tool, which is under the geometrical analysis category, okay? So let me load it. Okay. Let’s start from scratch, okay?

So, basically what you can do is you go here for every object that you may want to use, like this car, okay? I click (let me enable this one), I will click on a point on the shadow, and then I define a wedge which tells which part of the object could be responsible for that point on the shadow. I don’t know whether it is this part or this part, but for sure it’s somewhere between these two, okay? And this creates a wedge. As you can see, this way. Then you can do the same, for example, with this pole, okay?

So, we click here and, since this is narrow, it will help us a lot, like this, okay? And as you can see, these two wedges intersect, and so the system is feasible. It means that all the wedges that we have drawn so far (just two), okay? They intersect each other, which is good. But now let’s continue, okay? And we have this object here. So, we can click here and yeah…and a wedge here. Okay.

You can see that the feasible region is getting smaller and smaller, okay? And now we can go here, and like this, okay? And then we have this shadow projected by this pole here, okay? But since this is the top of it, okay? In principle it should be something like this, okay? And of course you can see that this wedge here is not intersecting any other, okay? Or, for example, we could have done the same with this other object down here. And as you can see, okay, the system becomes unfeasible. It means that there is no longer a common intersection between all these wedges, okay?

Just to give you an example, okay? I could load an image of a real car accident and show you the same approach, okay? Like here. I already computed it here. And as you can see, despite my having drawn many wedges here as well, they all actually intersect up here somewhere. It’s at infinity, so we cannot see where, but the system is feasible. It means that there is a point on the image plane far, far up on the right, okay? That is the intersection of all the wedges that we have. This is expected in a real image, okay? Where you have a point-wise light source.

We can also see another example with a picture of a person in the sun. So, we have this picture here, which is very nice. And if we start…oops, sorry, yeah…if we start drawing wedges here. I already did it to save some time, because otherwise it would take too long, okay? You see that all the wedges indeed intersect where the sun is, okay? Which is what we expect for an authentic image.

So, even if you draw many of them, if the image is authentic, this system will remain feasible. If we try to do the same, but this time with the diffusion-model-generated image, okay? Even with just three wedges, you see that the system becomes unfeasible quite fast, okay? Because of course, shadows are created in a visually convincing way, but mathematically not so much, okay? And so this is a good way to detect that an image is a generated image, a synthesized image, when it is outdoors or when you are in a room with a single point-wise light source.
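(The feasible/unfeasible wording maps to a simple geometric test: each wedge contributes two half-plane constraints, and the system is feasible if some point satisfies all of them. Below is a rough sketch using a linear program, with made-up pixel coordinates; note that this planar model also includes each wedge’s mirror cone and only handles a light source at a finite image-plane position, whereas the actual shadows tool also handles the projective case, for example a sun direction at infinity.)

```python
import numpy as np
from scipy.optimize import linprog

def wedge_halfplanes(shadow, p_left, p_right):
    """Two half-plane constraints (a, b) with a.x <= b for a wedge whose apex is
    the shadow point and whose sides pass through two object points."""
    s, l, r = (np.asarray(p, dtype=float) for p in (shadow, p_left, p_right))
    planes = []
    for through, inside in ((l, r), (r, l)):
        d = through - s
        n = np.array([-d[1], d[0]])            # normal to the ray
        if np.dot(n, inside - s) > 0:          # orient so the wedge interior satisfies <=
            n = -n
        planes.append((n, float(np.dot(n, s))))
    return planes

def system_feasible(halfplanes):
    """True if some point on the image plane satisfies every constraint."""
    A = np.array([a for a, _ in halfplanes])
    b = np.array([b for _, b in halfplanes])
    res = linprog(c=[0.0, 0.0], A_ub=A, b_ub=b,
                  bounds=[(None, None), (None, None)], method="highs")
    return res.status == 0                     # 0 = feasible point found, 2 = infeasible

# Hypothetical pixel coordinates for three wedges (shadow point plus two object points each).
wedges = (wedge_halfplanes((100, 400), (140, 250), (190, 260))
          + wedge_halfplanes((300, 420), (330, 260), (370, 270))
          + wedge_halfplanes((500, 380), (460, 500), (430, 480)))
print(system_feasible(wedges))
```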

A similar trick can be used, okay, for reflections. So, it often happens that when images are created by these networks, the authors tend to include reflections in them, because of course they’re very nice to see, okay? And very compelling. But there is a point here which is interesting, and the point is that when you have an object reflected in a mirror, okay, if you connect points on the object to the corresponding points in the reflection, all the lines will converge to a vanishing point. Let me show you an example of that that I prepared, okay?

So, I saved the project for this image. (Okay. Yes. Let me just turn this off, so we don’t get confusing colors here.) Okay. So, as you can see, I have drawn lines with the annotate tool, linking points, okay? That I marked with these red markers here. So the border of the eye with the border of the eye, the nose with the nose, and the bottom of the ear with the bottom of the ear. And this finger with this finger. And all these lines together, as you can see, they intersect somewhere here, okay? Perhaps not exactly, but more or less you can see that there is a vanishing point somewhere here.

Now, if we consider instead a Photoshop…sorry, a photograph of a woman doing her makeup in front of a mirror, okay? This time you see that the lines do not intersect at all. You can see we have an image, you have the reflection, but if you connect the points, okay, they do not all intersect at the same vanishing point. For example, this red line here is completely off, okay? Because of course, this kind of diffusion model does not have the physical, geometrical constraints to keep this kind of information consistent during the process. That’s the idea.
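(The vanishing-point check can likewise be scripted: estimate the single point that best lies on all the object-to-reflection lines and look at the residual. A small sketch with hypothetical coordinates follows; a large residual, as in the diffusion-generated example, is the suspicious outcome.)

```python
import numpy as np

def least_squares_vanishing_point(segments):
    """segments: pairs ((x1, y1), (x2, y2)) linking a point on the object to the
    matching point in its mirror reflection. Returns the point minimising the sum of
    squared distances to the supporting lines, and the RMS distance in pixels."""
    A, b = [], []
    for (x1, y1), (x2, y2) in segments:
        d = np.array([x2 - x1, y2 - y1], dtype=float)
        n = np.array([-d[1], d[0]])
        n /= np.linalg.norm(n)                 # unit normal of the line
        A.append(n)
        b.append(n @ np.array([x1, y1], dtype=float))
    A, b = np.array(A), np.array(b)
    point, *_ = np.linalg.lstsq(A, b, rcond=None)
    rms = float(np.sqrt(np.mean((A @ point - b) ** 2)))
    return point, rms

# Hypothetical pixel coordinates for three object/reflection point pairs.
segments = [((120, 340), (420, 300)), ((130, 365), (428, 322)), ((150, 420), (440, 372))]
print(least_squares_vanishing_point(segments))
```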

Okay, good. So until now, we have seen ways to process images individually. Authenticate also provides you with ways to process images in batches, okay? And there are several ways you can do that. One way is using the batch processing tool. (Let me load a folder of images to give you an example.) Okay. Say we have this folder here, okay? With a few images; it could be hundreds of course, here I just put six of them. I load one of the images. (Let me close the project. Okay.) And let’s say I want to batch process the images. I have this tool here, okay?

I can process all images in the evidence folder or included folders as well. And I could set a custom list of filters and configurations of them. I can customize it, okay? And run all those filters on all the images in batch. So, I can go home, the results will be stored and the day after I can come back and just work on the processed results, okay? This is very useful and time saving because some of the filters can take a while to compute, okay? But of course you still have to go manually through all the results.

The other option that you have is the smart report, which as you can see is very easy to configure. You can say ‘process all images in the evidence folder’ and you just hit ‘okay’. What this will do is load every file. First it will do a quick check of the image properties, and if no warning is found from the properties of the image, a green light will be attached to it, saying this image is probably camera original, and it will be skipped. Instead, if some warning is found, the image is sent through a set of automatically chosen local analysis filters that run through the image, and if some of these filters detect a warning, it will be marked with a red light.

So in this case, we see that three images have been marked as green. If we click on them, we can see the details: the analyzed properties. If you hover over it, you will see an explanation of each of them, okay? Yes. And since no warning has been found, the image has been classified as green, okay? So we save lots of time, because images that show no traces of manipulation here will be just skipped.

So, this is more of a triage tool, okay? While if we go to the images marked with red, and we click on this one, for example, you see that it was sent through local analysis because it was in bitmap format and there was no thumbnail, so this was a bit suspicious (for a camera original image). And indeed, as you can probably see already, one of these two flowers is cloned, and indeed we have the Clones Keypoints filter (let me turn it on), and we can also see that the JPEG Ghosts Map filter is telling us that probably this is the cloned flower, okay?

Also, the ELA is marking this as the suspicious one. So, the one on the right is probably the original, the one on the left is the clone. And then we have the images marked with a green light…yellow light. This one, for example. You see that in this case, this image has been sent to analysis because it had no thumbnail, no metadata at all, okay? And the reason is that this image was downloaded from Facebook. (We can tell from the name, you see FB, okay?)

So, it was downloaded from Facebook, so metadata was removed, so the image is no longer a camera original image. Still, when it goes through the various filters, no warnings are raised. And so instead of giving a red light, we give a yellow light, which suggests that when you have a vast amount of images, you could start your analysis from the ones with the red light and then progress downward, okay? Which could save you some time.

Okay. Let us now go to the other topic I wanted to show you, which is the source camera identification. So, in this task, what we want to do is to assess whether this image here, or this evidence image, has been captured by this specific camera exemplar or not. I’m not talking about the camera model. We have already seen that we have tools to understand whether the image is compatible with a certain model of a smartphone or camera. No, here we’re talking about the specific exemplar, okay?

So, I may have four smartphones of the very same model, produced on the same day, on my desk, and I want to understand whether an image has been captured by one of these four, okay? This is doable, okay? Thanks to a kind of analysis which is called photo response non-uniformity (PRNU) analysis.

Basically, the idea is that every camera has slight imperfections in its pixel sensor, meaning that each pixel element has a slightly different sensitivity to light, okay? This is almost invisible to the naked eye, okay? But if you capture many images, take only the noise component, and average it, you can create a pattern for that camera, okay? Which reflects the imperfections of its sensor. So, this is what our tool allows you to do. You take some pictures with the camera.

Actually, you should rather take pictures of a wall, of a flat surface, of a bright sky, okay? Here I’ve put these ones because they’re nicer to see. You compute this camera reference pattern and you obtain something like this, which, if you look at it, is not very informative. And then, when you have an image that you want to attribute, you will extract the noise, measure the correlation of the noise of the image with this pattern, and compare it with a threshold. If you’re above the threshold, you will say, “yes, this image is compatible”, okay? With a certain probability, with this camera. And otherwise you say, “no, it is not compatible because we are below the threshold”. Okay?
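(To make the recipe concrete, here is a heavily simplified PRNU sketch: Gaussian denoising instead of the wavelet-based filters used in practice, plain normalised correlation instead of PCE, and it assumes all images share the same resolution and orientation. File names are hypothetical.)

```python
import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter

def noise_residual(path, sigma=2.0):
    """Rough noise residual: the image minus a denoised version of itself."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    return img - gaussian_filter(img, sigma)

def reference_pattern(paths):
    """Average the residuals of many flat-field images from the same camera."""
    return np.mean([noise_residual(p) for p in paths], axis=0)

def correlation(evidence_path, pattern):
    """Normalised cross-correlation between the evidence residual and the pattern."""
    r = noise_residual(evidence_path)
    r, p = r - r.mean(), pattern - pattern.mean()
    return float(np.sum(r * p) / np.sqrt(np.sum(r * r) * np.sum(p * p)))

pattern = reference_pattern([f"flat_{i:02d}.jpg" for i in range(30)])
print(correlation("evidence.jpg", pattern))   # high correlation suggests the same sensor
```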

This is very easy to do in Authenticate. Basically what you need to do is to create a folder with your reference images that you capture with your camera. It would be something like that, okay? So, you put all your flat images of a sky, for example, in a folder. Then you load one of those images here, you go to ‘camera identification’, ‘PRNU identification’, and you hit ‘create PRNU reference pattern’. You just tell the software where you want this pattern to be saved, how many images you want to use. (Normally you will use all of them.) And you hit ‘okay’, you wait a few minutes and the camera reference pattern file will be created. (I already pre-computed it for the sake of this webinar, because otherwise we would be wasting too much time, okay?)

And so, there it is, you can load it here. You also get a log file, of course, with the list of images that have been used. And now you have your evidence images in a folder, like this; you can load one of them and it will be compared against the camera reference pattern. In this case, you can see that the peak-to-correlation energy value, which is a measure of correlation, okay, is 4,458. While the default threshold in the software is 60, which comes from scientific papers, basically, okay? And so we are well above the threshold, okay? Nearly two orders of magnitude. And so we say the compatibility is positive, okay? You can also run the analysis on all the images that are in the same folder. We have 35 of them here.

So, what it is doing now is loading these images, and it will try to compare them not only as they are: it will also try to rotate them, and if they’re not at the same resolution, it will also try to scale them, okay? To try to match them against the camera reference pattern size. So, we check for some geometric transformations. You can also configure an advanced settings file here that allows you to do an even more advanced geometrical search to detect images even in case of digital zoom, for example. (You can check our blog if you want more information about that.)

So, this is the result. (Let me hide the full path, which is space consuming.) And as you can see here, we have the file names, okay? The compatibility. And if we sort by PCE value, okay? You see that all the images that have been marked as positive begin with ‘D01’, okay? Which is indeed the device identification number of the images that I used to create the camera reference pattern. While you can see that the images that are below the threshold come from different devices here, okay?

Most interestingly, you can see, though, that the PCE value that we have for some images is much higher than the one we have, for example, for this image here. And the reason is that this image here, as you can see from the file name, was sent through WhatsApp, okay? And so it was rescaled and compressed. So, our algorithm had to match the scale first, so it had to rescale the image by a factor of two, as you can see here, okay, to match it against the camera reference pattern.

And when you upload to Facebook with the low quality option (this was available until a few years ago), okay? You see that the PCE…the correlation value decreases, because the image is rescaled quite a lot, as you can see, and also strongly compressed. But nevertheless, okay, we are still able to match it. So, despite the images being shared on social media, which is something that removes a lot of information, you may still be able, okay, to use PRNU analysis to link them to the source device. It is less reliable, of course, than working on original images, but it can still work.

Recently, we have discovered that with recent smartphones, developed in the last couple of years, you may have false positives with this kind of analysis, which were very unlikely in the past. This is probably due to some of the AI-based processing that smartphones are doing today. But we are working on that. We published a paper about this to warn everyone, including our users, about this issue, okay? It just means that you have to validate the tool on your specific device model before using it now, but once you’ve validated it and you have checked that your model is not affected, you can safely do PRNU analysis even on recent devices.

Okay. You can also do the very same thing but with video, okay? So, you can create a camera reference pattern starting from a video file, and you can use it to check whether a given video has been captured by a certain camera or not. And you can also use it in a window-based analysis fashion. We don’t have the time to go through it right now, but you can check whether pieces of the video have been captured by the same camera, or whether perhaps a portion of the video in some places is coming from a different device, which would be a trace of a video montage, of course.

Okay. Still about videos. I wanted to show you that the shadows tool can also be very useful in the analysis of videos. So, let me get you a video which went viral a few years ago. It’s loading. I hope it plays smoothly. It’s the famous “golden eagle snatches kid” video. So, we have the eagle flying in, it snatches the kid, flies away, okay? And then the kid falls to the ground. Now, normally we say that you should never, in general, you should never take screenshots of a video and then use image authentication filters on them because, of course, screenshots are evil. But even if you export the frame in the correct way, the way videos are compressed is very different from the way images are compressed and processed.

And so you need a dedicated filter for that. But this is not true when it comes to shadows analysis, of course, because shadows, I mean, are physical elements, and so if they’re not consistent in the video, that is a meaningful finding regardless of how the video was processed. You can even print something and scan it back, and you could still do shadows analysis. And so indeed, in this case, if you extract one frame from the video where you have the eagle, okay, and you load it, we can do the shadows analysis. Yeah, let’s do it together, okay?

So, I can go here and, as I showed you before, I can draw my wedges. Let’s draw them, and then the head of the guy, and then perhaps the backpack. See, the system is still feasible. (It will be up here.) But then, when I go and click on the eagle, even if I am very conservative in my selection, the system becomes unfeasible, okay? You see that the shadow of the eagle should not have been here, okay? But much further down here, okay? And so the system becomes unfeasible. The moment I mark this one as an inconsistent constraint, it becomes feasible again, okay? So, this is suggesting that probably this is the problematic shadow, the eagle’s shadow. By the way, I think the shadow of the child is also missing, but it could have been behind the dad.

Okay. One more thing I wanted to show you is about video and double encoding detection. This is a tool we have, okay? This tool here is dedicated to video and it is devoted to doing something similar to what the JPEG Ghosts Plot did for images: trying to understand whether a video has been compressed more than once, okay? So, for example, we can go to this folder here. Okay. Here we have two versions of a video, okay? This is a video that was originally encapsulated in a proprietary video format, okay?

Now, you know, if you’ve been working with video surveillance and the forensic analysis of surveillance videos, you may know that surveillance video often comes wrapped in proprietary formats that cannot be easily decoded with software, okay? With standard players, I mean. You need to process these contents to extract the video, which is something that our Amped conversion engine does, okay? But in the process of conversion, there are two ways you can extract the original stream. You can do a stream copy, which means that you retain the original pixels and you just change the container, which is the preferred choice, of course, because it preserves the quality. Or you could transcode the video, which means you decode it and you encode it again, okay? Now, if you’re given a video in your hands, okay?

And you have to wonder whether it has been re-encoded or not, this tool that we have in Authenticate can help you, can assist you. For example, in this case, we have the stream copy version. If we drag it here and we run the analysis, this tool will check how the number of different kinds of macroblocks in the video, okay, varies through time, okay? We can see the amount of intra-coded macroblocks in every frame, the amount of predicted macroblocks in every frame, the amount of skipped macroblocks in every frame. And there is a paper about that, which tells us that when a video is re-encoded and an intra frame is re-encoded as a predicted frame (or a bidirectional frame), you get a fluctuation in the kind of macroblocks that is used, which we can measure through this VPF signal, okay?

So if you compute the signal and it is peaky and periodic, it means that probably we have a double compression. In this case we see some random peaks around, but if we normalize it, we see they’re very small and they’re not periodic. And indeed the automated analysis is telling you that we could not detect double encoding in this specific video, okay? Which is good news, okay? Because it means that probably we are dealing with the original one. But if we do, again, the analysis on the transcoded version of the video…okay, this time you see that the plot is indeed with some periodic peaks like this, okay?

And if you do the…if you look at the automated analysis, it will tell you that the estimated periodicity strength, okay, is much stronger than the threshold: 2.54, which is well above 0.60. And it also estimated the period, which is 15. This is telling us that this video has probably been encoded twice, and the group of pictures size of the previous encoding was 15, okay? So, this is yet one more piece of information about a property of the original file before it was recompressed, okay? So, as you can see, this can also build up your case when you’re working on a video. You may want to check whether there are clear traces of double compression, because this goes against the integrity of the video. It is good to know before using it for evidential purposes.
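(The periodicity estimation can be illustrated with a small autocorrelation sketch on a VPF-style signal built from per-frame macroblock counts. Getting the counts themselves requires a bitstream parser and is not shown here, and the exact signal definition follows the published VPF paper rather than this simplification.)

```python
import numpy as np

def estimate_period(vpf_signal, min_period=2, max_period=60):
    """Return the lag with the strongest (normalised) autocorrelation peak.
    A strong, repeated peak suggests periodic re-encoding artifacts, and the lag
    estimates the GOP size of the earlier encoding."""
    s = np.asarray(vpf_signal, dtype=float)
    s = s - s.mean()
    ac = np.correlate(s, s, mode="full")[len(s) - 1:]
    if ac[0] > 0:
        ac = ac / ac[0]
    lags = range(min_period, min(max_period, len(ac) - 1))
    best = max(lags, key=lambda k: ac[k])
    return best, float(ac[best])

# Toy example: a noisy signal with a spike every 15 frames, mimicking a prior GOP of 15.
frames = 300
toy = np.random.rand(frames) * 0.2
toy[::15] += 1.0
print(estimate_period(toy))                    # expected to report a period of 15
```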

Okay. Before we say goodbye, I just wanted to tell you that there is much more that we couldn’t go through during this webinar. You have a batch file format analysis comparison, which allows you to compare in batches the main properties of images and create a table that counts the differences between the images, okay? Which is very useful to locate suspicious images in a folder. And you can extract embedded JPEG images from any kind of file.

And yes, you have the PRNU-based tampering detection tool. And let me show you: when you purchase the software, okay, it comes with a samples folder, which I can open now for you, okay? In the samples folder, you can see you have 17 examples that guide you, okay? Through the use of the software. You have the image and its own project file that you can load. As you can see, it’s pretty rich, okay? And you get a description of the various steps of the analysis, okay? So, you can see it’s like this, and we have lots of comments. If you generate the report, you will see it, okay?

So, you can use the samples folder as a very good way to learn about the software, although we do strongly recommend training for this kind of software because, as you can see, it is very technical indeed. Yeah, there we go. So, as you can see, we have the table of contents with the various bookmarks, okay? (Yes. This way.) And for each of them you have some comments, and you have annotations and so on, so forth. You can color the bookmarks based on their degree of suspiciousness, let’s say, okay? And so you can create a very compelling report like this. You have 17 samples, so you can learn a lot about the software by going through the samples folder.

Okay, I think I have basically covered what I needed to go through. So, for any question, you can write to this email address and ask whatever you like, okay? Or you can visit our website and our blog for a lot of information freely accessible to everyone; there is no subscription needed for the blog.
