The Impact Of AI On Video Forensics: Insights From Amped Software

Si: Welcome everyone to the Forensic Focus Podcast. Today we have with us a friend of many occasions now, and we see Amped often because their stuff's brilliant: Martino Jerian, CEO of Amped Software. We're going to have a nice, interesting conversation today about some of the work you've been doing in relation to AI and regulation, presenting to the European Parliament, and all sorts of exciting and amazing stuff, getting your hands really dirty at the legislation end of things: where AI is taking us, what problems we're going to have, what advantages we're going to have, and how you're going to solve all of this single-handedly. It's going to be brilliant. I'm really looking forward to it!

Martino: No pressure!

Si: No, none whatsoever. We expect deliverable outcomes by the end of this meeting to actually achieve some sort of common sense in the industry.

Martino: Thanks for having me again.

Si: It’s always a pleasure. So tell us, I mean, you have been talking to the European Parliament about things. What exactly have you been talking to them about?


Martino: Yeah, we did two meetings, one in 2023 and the other in 2022. I actually delivered a kind of broader document; AI is just a small part of it. The starting idea was that there has been a lot of discussion for a few years about cybersecurity and AI, face recognition, privacy: the usual stuff. But when it comes to video evidence, even without thinking about AI, which is a big deal, there are more foundational issues in the way video evidence is used. The biggest problem is probably educational literacy. I'm at the point where I bore myself with this, because I always say the same thing.

Everybody is able to take pictures on vacation or watch a video on YouTube, so everybody believes they're a forensic expert. Sometimes video and image evidence is a bit taken for granted. That's at the bottom of all the work we've done. So we prepared a set of principles, both for very high-level institutional chiefs, journalists or laypeople, just to point them at the major issues and topics they should be aware of, and then some more technical principles for practitioners. And we didn't reinvent the wheel.

We summarized in a single document the usual guidelines that are available in the community, like the SWGDE documents and the MC documents, from the US and Europe respectively. There are also a lot of regulations in the UK. I consider our principles a kind of trailer, a kind of advertisement for the actual guidelines, because people should be more aware of them, and more of a general introduction to the big topics.

Desi: Is that document that you’re referencing there…I was doing a bit of reading in preparation for this interview, was the…that’s the essential concepts and principles of the use…?

Martino: Yep.

Desi: Yeah, okay, so it was like you said, like it kind of covers big topics and we’ll link that for our listeners in the show notes as well if they want to go and download that from Amped’s website and have a read themselves.

And so it's interesting, and I think we'll get into it as we chat with you now, but I guess there are two sides to the coin with AI. On one side, quite a lot of the images you get these days may have been influenced by AI, and I think that's something I read in one of the articles on your website that I want to jump into. The other side is the impact of using AI in the tools themselves. We've had an interview before, the DeepPlate interview about enhancement of license plates, where we talked about how you throw up a warning: it's not there to replace the investigator at all.

How do you see that now? That was quite a while ago, so how do you see AI influencing tools today?

Martino: Yeah, image and video forensics is a very broad field. It's a niche, but a very broad niche, in the sense that we have all kinds of applications. We have the most spectacular one, the mother of all CSI effects: enhancement. We have authentication, very popular now with deepfakes, though there are also the old so-called 'shallow fakes' or 'cheap fakes'. There is CSAM detection, there is image content analysis, "find all images with drugs, weapons", and so on. There is video content analysis, like summarizing videos, "find all instances of a red car in all the videos", stuff like that. And different applications have different potential and challenges.

So, I've been studying the topic of AI for video forensics for quite a long time. There's a blog post on our blog from 2021 that has been quite popular, where I outlined a kind of framework for how to deal with image and video forensics in relation to AI. And I see that it's been an inspiration for a few people.

And recently I did a deep dive into the AI Act. I read it entirely, multiple times. Of course it's much broader, much more generic, but the overall philosophy for our kind of application is more or less in line. The overall idea is: use it, but with caution. I think I've sometimes been labelled an AI skeptic, but I'd say I'm more of an AI pragmatist. I enjoy chatting with ChatGPT and not trusting him or her, whatever! I enjoy making pictures with Midjourney, Stable Diffusion and tools like that. But, as with all tools and innovations, there are things to be aware of, especially when we're dealing with people's freedom; that's very important. Sorry, I was too long!

Si: No, you're absolutely not. We're here to listen to you, so talk as much as you like! You and I have talked previously, and I know that Amped has worked with universities to do research yourselves. But AI is such an immensely, I'm going to say popular (I was going to say buzzwordy) area.

It's very easy to get a research grant at the moment if you put AI in the title, and therefore there's a lot of AI-heavy or AI-centric research being done. I know Amped is integrating some of this, but how are you finding the adversarial nature of it (and I'm using that word in two different senses)? There's the sheer volume of academic research coming out that enables people to do truly amazing things, but there's also research on adversarial neural networks designed to defeat the detection technology we already have and manipulate it. How are you getting on with all of this volume of material?

Martino: Oh yeah, it's crazy. But in the end, a lot of research, in my opinion, and it's always been like that, is research for the sake of it. Even when we were studying image authentication and tampering detection without AI, ten years ago or more, with a lot of papers you'd just skim them and say, "okay, this is very interesting in theory, there is a lot of very nice math, but in the end it's not useful in practice."

For example, I remember people studying tampering detection with the CFA, the color filter array pattern on the sensor, which only worked on images without compression. Every image has compression: JPEG, HEIF, whatever. So it's the same now. There is a lot of work to keep up with, but there are definitely trends going in a certain direction that you can follow, and there's a lot of material that is useless. It takes a bit of triage. The first triage is easy; then testing and understanding what actually works and what doesn't is harder, of course. But I think you can always see a trend of what works and what doesn't, and focus on that.

Si: So what trends are you seeing that are interesting at the moment?

Martino: Oh yeah, some of the deepfake detection techniques that we have implemented in Authenticate, for example, for the diffusion models. We started with GAN (Generative Adversarial Network) detection, like thispersondoesnotexist.com, a few years ago. Those still exist, but now the most popular generators are diffusion models like Midjourney, Stable Diffusion, DALL-E and various similar technologies.

And one of the techniques we started implementing, we saw people start publishing, and there are different variations of course, everybody's doing their own thing. But more or less the idea is that it seems like a bit of cooperation [inaudible]; at a certain point things become, I can't say obvious, but obvious to whoever is deep inside the community. A technique emerges and people push in that direction until something better emerges. That's more or less my feeling. I'm not the person directly working on this technical stuff, but I try to keep up. I have a few Google Scholar alerts popping up, and at least I try to skim what's coming out.

Si: No, that’s very fair. And you said you messed around, you dabble yourself. What’s your favorite image generation one?

Martino: I like Midjourney. I really like it. If you've seen my LinkedIn profile, I'm sharing the principles we spoke about, each one together with a kind of comic image summarizing the concept, which I made with Midjourney at the beginning of this year. It took quite some time, because getting exactly what you want in the style that you want is not easy. You need to accept some trade-offs, but even with a graphic designer you have to communicate. We have worked with graphic designers before, and we still do from time to time, because you cannot do everything with AI, of course! But yeah, it's a lot of fun.

Si: It's kind of an interesting metaphor really, isn't it? AI does that sort of preliminary level of the work, but you still need a professional to come along and actually finish it at the end of the day. There's been a lot of talk about whether it will do professionals out of a job. I think what we're going to see on the design side, and even the photography side, is that a lot of the poorer-quality people will disappear from the market. Yes, it will result in job losses, but the really good people will now be able to charge three times as much to take on the work that's required to tidy things up at the end and truly do it.

And AI lacks human creativity. So unless you, as a creative person, are able to tell it what you want creatively, it's just going to regurgitate the same old output it does for everyone else. So it's quite an interesting field in that regard. And for forensics it's kind of the same, inasmuch as it seems to be targeted at that low level of getting material processed, because we're suffering from such a glut of data to look at: terabytes and terabytes coming in, which is manually unreviewable. So yeah, I think it's interesting that you brought it up that way.

Desi: Well, that was actually a nice segue into the question I was going to ask. AI is fledgling, and based on your research and what your company does, where do you see AI being really useful to law enforcement and the criminal justice system right now? And where do you see it going in the next, say, five years, versus the hype that AI will just do everything for us?

Martino: Yeah. I'll speak about image and video, which is my focus; I have ideas about the other areas, but I'm not that informed. I think it can be really huge for automating what a human could do given enough time and capacity: image and video indexing, summarizing, searching for all instances of something, and so on. For CSAM cases it could probably help diminish the psychological burden on people who have to watch terrible material.

On the other hand, we must be careful not to over-rely on these tools. If we just trust the output telling us, "okay, I found all the instances of the car you're looking for", well, the AI, like a person, but probably even more so at this moment, can miss things. So it's a trade-off, of course. We cannot watch twenty cameras for one month and catch everything ourselves. But how can we verify that all the important information has been detected? Either we do a manual review, which makes the automation a bit pointless, or we try another system, or we change the system's parameters. That's a big problem: getting the work done without over-relying on the tool. And it depends on the amount of data.

The other important field is deepfake detection. There are challenges there, in the fact that, at least with the current level of technology, I cannot go to court and say, "I'm sure this is a deepfake because this software told me it's a deepfake." In our Amped Authenticate, we give you a classification of whether an image has been generated with certain tools or not, together with the confidence of the network. That is a confidence, not a probability, which means that, like a person, the network can give you a result and be confident it's right while actually being wrong. I can be sure of something and still be wrong!
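To illustrate the confidence-versus-probability point, here is a minimal Python sketch. It is not Amped's implementation, and the labels and logit values are invented for illustration; it only shows why a classifier's softmax score tells you how strongly the network leans one way, not how likely it is to be correct.

```python
# Minimal sketch, not Amped's implementation: a network's softmax "confidence"
# is not a calibrated probability that the prediction is correct.
import numpy as np

def softmax(logits):
    """Turn raw logits into scores that sum to 1."""
    exps = np.exp(logits - np.max(logits))
    return exps / exps.sum()

# Hypothetical deepfake-detector logits for one image: [real, generated]
logits = np.array([0.2, 4.7])
scores = softmax(logits)
labels = ["real", "generated"]
prediction = labels[int(np.argmax(scores))]

print(f"prediction: {prediction}, confidence: {scores.max():.1%}")
# Prints roughly 99% confidence, yet the ground truth may still be "real".
# Without calibration (e.g. measuring accuracy per confidence bucket on a
# held-out set), that number cannot be read as a probability of being right.
```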

The situation is different with traditional tools, like analyzing the image compression in a model-based, traditional way, because if I see traces of manipulation, I'm pretty sure the image is manipulated. For this reason, we say that an AI-based deepfake or tampering detector alone is not enough to show that an image has been tampered with.

You always need to look at the image or video from multiple points of view: the format, the metadata, the content, and the statistics of the pixels. And then, of course, AI tools have essentially been trained on classes of images: fake, real, from one piece of software, from another, from a camera. They can always be wrong. And the problem for forensics is that we are not able to explain how they work.

So unless we move to what is called interpretable or explainable AI for forensics, the use is somewhat limited. Luckily, for deepfakes specifically, the opposite direction is probably more interesting: showing that the evidence is original and passes integrity verification, because that is probably what is going to happen more often. I go to court and the defense says, "oh, that's not my client, because that's a deepfake"; that's probably more common than the opposite, I think. In that case, showing that the evidence is the very original file is somewhat easier than reliably detecting whether something that may have been on social media, transcoded and converted multiple times, is actually a deepfake or not.

Si: I’m sure you’ve…I’m a hundred percent confident you’ll have heard of Leica’s embedded authentication stuff.

Martino: Yeah.

Si: Do you think we're going to see…sorry, I'll clarify that for everybody else now, Martino knows what I'm talking about. Leica is a camera manufacturer, an incredibly expensive one that I can't afford, and they have decided to embed into their raw files, when you take a photo, a digital watermark or a digital checksum or a hash or something, I don't know exactly how it works, but effectively something to show that a picture is authentic at the point it was taken, effectively moving that hashing process we all do right up to the image creation process. Do you think we'll see that become more common for cameras, and video cameras as well, because of the idea of deepfakes? Or do you think that's just too processor-intensive and too complicated to do cost-effectively?
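For readers who want the gist of what Si is describing, here is a minimal sketch of fixing a cryptographic tag at capture time and checking later copies against it. The key, function names and data are hypothetical, and real in-camera schemes rely on signed manifests and certificates rather than a bare keyed hash; this only illustrates the workflow of moving the hashing step up to the moment of capture.

```python
# A minimal sketch (hypothetical key and data) of the workflow Si describes:
# fix a cryptographic tag at capture time, then check later copies against it.
# Real in-camera schemes use signed manifests rather than a bare HMAC.
import hashlib
import hmac

CAMERA_SECRET = b"device-unique-key"  # assumed to be provisioned inside the camera

def seal_at_capture(image_bytes: bytes) -> str:
    """Compute an authentication tag at the moment the photo is taken."""
    return hmac.new(CAMERA_SECRET, image_bytes, hashlib.sha256).hexdigest()

def verify_later(image_bytes: bytes, tag: str) -> bool:
    """Check that the file under examination still matches the sealed tag."""
    expected = hmac.new(CAMERA_SECRET, image_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

original = b"...raw sensor data..."
tag = seal_at_capture(original)
print(verify_later(original, tag))         # True: untouched file
print(verify_later(original + b"x", tag))  # False: any change breaks the tag
```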

Martino: I think they also tried to embed it in some Android phones a few years ago. They included one of these [inaudible] in the chip, so on some phones (I don't remember the brand), among the settings you have for a photo, like portrait, landscape, panoramic, there was also a 'secure photo' mode which embeds this. If it gains ground, it would be very important, of course, and would be the best way to tackle the issue, but there are some challenges associated with it.

First of all, like any technology, it can be hacked. In fact, I remember showing this at my very first Authenticate trainings many years ago: I think both Nikon and Canon launched a similar kind of digital certificate for their cameras, something like 15 years ago, and their method was exploited. I think it was Elcomsoft, if I remember correctly, who released on their blog some obviously fake images, like the moon landing with the Russian flag and things like that, that passed the Nikon digital signature verification. So everything can be hacked.

The second thing is that we would need to reach 100% coverage in the devices that produce this, and also, in some way, embed it in the various tools and social software that modify pictures, because in the end, how often are you looking at an original image from your phone? Most of our pictures come from websites, social media, or friends sending them through WhatsApp.

Only the pictures I produce myself are normally the original version, unless you are very careful about transmitting the original. And there is a third point which is very sensitive: nowadays there is also a lot of AI processing in our smartphones. Even though an image can be original, there is a lot of computational photography, and a lot of tricks that can be applied even in real time. So there are a lot of challenges, and in general the idea is that you need to understand the technology, how images are formed, and the risks associated with each situation.

Si: So, the computational photography one is particularly interesting, I think, because there's no intent to manipulate the image. Somebody takes a photo of something happening in the street, and all of a sudden it automatically becomes a manipulated image purely because it was taken on an iPhone with those settings on by default. How critical do you think it is to have an understanding of that when you're giving evidence in court? I know you do a lot of training to help with that, but is it something you're addressing at the moment?

Martino: We mention it, and we've seen, for example, computational photography impacting the PRNU, the sensor noise that is present in all images and allows a kind of camera ballistics. But I think the point is that, from a certain point of view, the situation hasn't changed. Let me give a practical example. Even before AI and computational photography, there was already a lot of processing happening in the camera, white balancing, interpolation and so on, before you actually get the image you're seeing.
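As a rough illustration of the PRNU idea Martino mentions, the toy simulation below uses entirely synthetic data with an exaggerated fingerprint strength; real pipelines use wavelet denoising and peak-to-correlation-energy statistics rather than a mean filter. It only shows the principle: correlating an image's noise residual with a camera's reference pattern hints at the source sensor.

```python
# Toy simulation of PRNU "camera ballistics": each sensor leaves a faint fixed
# noise pattern, and correlating a photo's noise residual with a camera's
# reference pattern hints at whether that camera took the photo. The fingerprint
# here is synthetic and much stronger than a real PRNU signal.
import numpy as np
from scipy.ndimage import uniform_filter

def noise_residual(img: np.ndarray) -> np.ndarray:
    """Approximate the sensor noise as image minus a smoothed version."""
    return img - uniform_filter(img, size=3)

rng = np.random.default_rng(0)
prnu = rng.normal(0, 2, (128, 128))  # the camera's hidden fingerprint

# Reference pattern: average residuals of several flat shots from this camera
flats = [rng.normal(128, 5, (128, 128)) + prnu for _ in range(20)]
reference = np.mean([noise_residual(f) for f in flats], axis=0)

scene = rng.normal(128, 10, (128, 128))
photo_same_camera = scene + prnu                           # same sensor
photo_other_camera = scene + rng.normal(0, 2, (128, 128))  # different sensor

def corr(a, b):
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

print(corr(noise_residual(photo_same_camera), reference))   # clearly above zero
print(corr(noise_residual(photo_other_camera), reference))  # close to zero
```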

One thing you see very often is artifacts due to image or video compression. We've dealt with many cases where you can somewhat make out the number on a license plate and you're left guessing: is this really part of the plate, or an artifact my eyes perceive as a letter because I expect to see a letter there? There have been cases with scars or moles on a face: can you trust them or not? Often there's a technical reason, usually compression, but many other artifacts can come from the camera. Even the simple fact that, depending on the focal length of your optics, the shape of a person's face can change.

These are all things we should be aware of, so from that point of view it's very similar with AI. The problem with AI is this: with a low-quality image, you can at least perceive that there is not much you can rely on. With AI enhancement you lose that sense that the quality is poor and shouldn't be trusted. It makes up details, "improves" things, and your perception is that of a good image, even though a lot has been added that may or may not resemble the hypothetical original, I mean the actual scene, and there is no way to know.

In fact, I was recently reading a paper that suggested embedding into the chips or firmware of mobile phones a way to produce, together with the generated image, a map of the areas that have been changed by the AI post-processing inside the camera. That would be quite interesting. I'm not sure if it's really feasible, or of interest to the big manufacturers, though.

Desi: I wonder about that, because we spoke briefly at the start about, I guess, the magic of AI enhancing images, and obviously that's not great from a legal standpoint because we can't explain it. From any of the research you've been doing, have you seen a shift in the desire to make more explainable AI? I imagine that's going to be the first step in being able to use AI in a process that could stand up in a court of law for forensics: you need to be able to explain it first. Is there any uptake in the community for that? Or is everyone just, "let's build all these large models and not worry about trying to explain them"?

Martino: I haven't seen a lot of work on interpretable or explainable AI for enhancement. Maybe I've missed some, but I think the trend has been, "look at this amazing project that does everything." I've shared a few examples of things that can go wrong there. One of the various tools had really impressive results on its webpage, but even there you could see that it completely made up the license plate of a car, and it also transformed the logo of a car: I think a Nissan was turned into a Dacia, or the other way round. I'm definitely not a car expert! But I remember it changed visibly: in the low-resolution image you see one brand, and in the high-resolution one you see the other, which was pretty funny because it was their own demo.

So I think the forensic community, at least those who approach things forensically, pretty much agrees that at the moment doing image enhancement with AI is not a good idea. In the blog post I mentioned before, I essentially divided it into two situations: when you're using it for investigations and when you're using it for evidence.

For investigations, when you don't have any clue who somebody could be, doing some tests even with AI enhancement could work, if you don't abuse it, if you're sure it doesn't become evidence (the border between evidence and, let's say, intelligence is sometimes very blurred), and if people don't rely on it too much, because they need to be aware of the risk. And in fact, you'll remember us talking about DeepPlate.

Even though we could provide the enhanced image, we didn't, because we don't want to bias the person into believing something. I can say that an AI-enhanced image, technically speaking, is a deepfake, because it's not a processing of the original image: it's a new image created using the input image as an example, to which is applied everything the network has learned from the huge dataset it was trained on. So that's the complicated part.

And a few months ago there was a big case in the US; I don't remember if it was a Daubert or a Frye state. There was a hearing where they essentially excluded evidence that had been enhanced with AI, for various reasons, first of all the fact that this technology has not been accepted and validated by the scientific community. They excluded the evidence and said, "we cannot use this."

Desi: Yeah, I do remember reading that. I just can’t remember where I read that.

Martino: It was in Washington State. The most widely shared article is on NBC, I think.

Si: Yeah. I think it's kind of interesting, because if you actually think about it, it becomes somewhat self-evident: if there wasn't enough data in the picture for you to read it, and now you can read it, then data has been added. And if we were to say, "well, this was a computer and we've added some CSAM material to it so that you can find CSAM material," we would all be going, "that's totally ridiculous."

So getting that understanding across is a challenge. From the conversations you've been having, and you've been talking about AI and video evidence to a wide audience since 2021, do you think there's a better understanding out there now? Or are people still as ignorant as they were four years ago?

Martino: No, I think it's improving. It really depends on the current context in different countries. For example, in the UK I think there have been a lot of challenges, but also improvements, with all the issues around ISO certification. It definitely becomes more complicated to work, but things are also taken more seriously, so there are pros and cons. And it really depends on the country, the organization, the individual. In one of the articles about the case where AI-enhanced evidence was excluded, some of the people interviewed commented, "oh, these technologies are used all the time, people just don't say so." That's a bit worrisome. But overall, I think the situation is improving.

Si: I think it’s a wonderful topic, and I could sit here and talk about it forever. I actually read artificial intelligence at university and having had a conversation with you about how old I was earlier on, you can figure out how long ago that was. And it’s interesting to a certain extent how little things have changed. And yet how much has changed. The only real difference is the amount of stuff we can get through per second on a processor, but the principles actually haven’t changed all that much since I started.

It's going to be a continuing problem for us going forward, and in other areas as well, because we're talking about video and images since that's your field, but I heard somebody else talking about all the problems with audio. In fact, audio seems to be somewhat easier to fake. I heard about a case where somebody got a phone call and heard their daughter's voice on the other end of the line pleading for money to be sent because she'd been kidnapped, or something along those lines.

And an AI had just been trained on her voice. To be honest, we're all lining ourselves up for this with these lovely podcasts putting our voices out into the world as huge buckets of training material. So everyone: if you get a phone call from me, it's not me. I don't use the phone! You'll get a text message or something. Not that that proves anything! But the range and creativity of people in using this stuff is getting wider. We've got deepfake images and deepfake audio, and we've got that defense case lining up: it used to be the Trojan defense, "it wasn't me, it was a virus." Now it's "it wasn't me, it was a deepfake." Have you seen anything truly innovative in the way people abuse image AI yet?

Martino: No, not really, but I want to comment on the fake voice of the kidnapped daughter with a very recent case. Maybe you heard about the one that involved the CEO and an executive of Ferrari, the luxury car brand. It was pretty interesting, because when this executive was asked, I think, to sign a contract or do something very urgent, something rang a bell. So he asked this supposed CEO something about a book he had recently recommended to him, and that way he verified it was a deepfake, not the CEO's real voice. This highlights that no matter what the technology, education and the human are always the strongest, or the weakest, link in the chain.

Si: Yeah. I hadn’t heard about that. We’re going to go and look that up afterwards so we can share that in the show notes. That’s brilliant.

Martino: Yeah. It’s from last week, I think.

Si: Oh, perfect. And actually that comes full circle to what you were saying earlier about the difference between authentication of an image, is it authentic, is it true to what it purports to be, versus whether it has been edited. Because obviously you can edit an image, we do enhancement to bring them up to scratch, and that authentication step is all about chain of custody and all that sort of thing, which would be lacking from a deepfake. So it's an interesting thing. Where do you think we're going next? In terms of Amped, have you got some cool new AI detection or AI-based features coming up?

Martino: Yeah, AI is a big part of our research and a small part of our products, because we like to understand, test and see what it's worth, what is risky, what is acceptable or not. Maybe you've seen the research we published a few years ago on how useful enhancement is for recognizing faces. We tried it with celebrities.

We enhanced images with AI or with classical algorithms like bicubic interpolation, and saw which one was more successful in having people recognize the celebrities. And basically the result was the same, because in some cases the AI actually improved the quality, while in other cases it improved the quality but created another person, and people weren't even able to recognize a celebrity everybody should know. So that's probably the biggest part.

I mean, studying and researching various aspects, not only the technical but also the regulatory ones. I think we shouldn't underestimate the AI Act, because it's very constraining. As I told you, I read it and tried to study it entirely, focusing of course on the things that are of interest for our field. If it's implemented, deepfakes should, to an extent, no longer be a problem in a lot of cases, because every system that creates them should embed a watermark.

Of course, that will not be universal, because they cannot control open source and free tools, things like that. On the other hand, it would be very [inaudible] by anybody else. There is a lot of compliance work if you fall inside the specific cases, the so-called high-risk cases, which some aspects of law enforcement belong to, though not all. In that case, there is a lot of paperwork you have to produce describing how the AI is supposed to work, how it's supposed to be used, what the risks are, and a lot, a lot of information about the datasets, which is the biggest problem with AI now, because you need to show which datasets you used and that they were collected in the proper way.

Most of the systems we see now use data that has been scraped without clear legal consent regarding privacy, copyright and so on. So I think it will be interesting to see how this evolves. We can create great tools, but if the AI Act does not allow us to use certain datasets, it will be very difficult to make them evolve. There are still a lot of gray areas, but it's a significant challenge and an important thing to regulate, because you cannot just scrape images of people randomly without any permission.

Desi: We've covered quite a lot in this, and we've had multiple interviews with Amped about the technology, and about AI with other people as well. There's so much information out there and so much to learn. For forensic experts who need to stay on top of this stuff, or need to start their journey learning about AI and being aware of it, where is the best bang for buck to start upskilling, whether they're part of your ecosystem using your tools or just interested in general and need to stay on top of it for their job?

Martino: Ask ChatGPT. No, I’m joking!

Desi: Use AI. Excellent!

Martino: Yeah, I don't have a single source. I actually spend a good part of my day reading. I subscribe to various RSS feeds, there is interesting stuff shared by people on LinkedIn, there are of course colleagues sending me things, and I've set up a lot of Google Alerts. So I cannot point to a single source of truth. It's a lot of work, definitely.

But the problem is usually understanding where the actual potential lies. There are still people who say, "AI will eat all the jobs and do everything, we won't work anymore," and there are people who say, "it's just a bubble, AI can't do anything." The reality is probably in the middle, as usual. And of course, when you're gathering information, you always need to be careful about whether you're reading from one side or the other, and interpolate to get at the reality, which is not easy. But it's important to be pragmatic and not just sit in a bubble where you believe what you want to believe, as usual.

Si: I'll help you out on that one. You're obviously being incredibly modest: the Amped blog is a fantastic source for all sorts of discussions about AI and image and video forensics at least, and you have been consistently excellent in referencing the academic papers you're talking about. So that would be a good place to start if you're truly new, and a good place for all of us to keep up to date with. I'd certainly recommend it, and we'll put a link in the show notes.

Martino: Thank you. I also share shorter bits more regularly on my LinkedIn profile. Most of the news I mentioned, like the Ferrari case and the Washington case, I usually comment on over there, because they're not worth a long blog post but are still interesting. So if you go to my LinkedIn profile, you can see a lot of these small bits here and there.

Si: We will put that into the show notes so that you can be bombarded with connection requests of…

Martino: Fake people, because that’s also a problem!

Si: Yes!

Martino: You know, a couple of years ago I was actually testing Authenticate on connection requests, because there was a period when we got all these profile images made with thispersondoesnotexist.com or similar. Then there was a period when I got something like three connection requests per day from "full stack developers", all with more or less the same kind of bio, not identical but similar, and always with a picture made with diffusion models. Now I think LinkedIn has done something, because there are fewer. Not zero, but fewer!

Desi: Yeah, I did go through and read your blog post on whether AI can be used for forensic investigations, and that was an awesome one to read; it's one of the others we'll link. It has a very nice "in a nutshell" summary, which was my favorite part, because I got to the end thinking, "wow, that was a lot to take in," and then it summarized it all again. I was like, "oh, this is all the stuff I learned at the start, I don't have to go back and read it twice!" But yeah, a lot of good stuff on there. So, Si, any more questions from you, mate?

Si: No, I don't think so. I honestly hope you'll come back and talk to us again; given the pace of change, six months would probably be more than long enough for things to be completely different and for us to have an entirely different worldview. There will certainly have been another election and a whole new bunch of fake and/or not-fake news that we'll be trying to filter through.

And of course these events drive usage and detection, which in turn drive enhancement and refinement, and we'll see things move on politically and in world affairs. It's all exciting here in the UK at the moment; this is how the sausage is made, but we're currently experiencing riots because of fake news, which is kind of scary. So in a few months we'd love to have you back to talk about this, and Amped, and all the work you're doing with legislation and video and image evidence going forward. It's been an absolute pleasure having you, and I thoroughly look forward to having you again in the near future. My cat is now disturbing me. (Excuse me, we're in the middle of a podcast, please stop!) So, Desi, do you want to take us out, because you seem to remember where all the podcasts are and I never do.

Desi: Yeah, which we should always put up front, but thanks everyone for joining us this week. We always love having you come and listen to our podcast. You can grab it from anywhere you get your podcasts, and you can watch the video on YouTube and on our website, forensicfocus.com, where you should also be able to grab a transcript of the show. We'll put the show notes and links on all the platforms, so you'll be able to get to everything we've been talking about, the blogs and of course Martino's LinkedIn, whether you're a real or a fake person. So thanks everyone, and we'll catch you all next time.

Si: Cheers.

Martino: Thank you! Thank you. Bye.
