Understanding The AI Act And The Future Of Image And Video Forensics

The following transcript was generated by AI and may contain inaccuracies.

Martino Jerian: I’m Martino Jerian. I’m the CEO and founder of Amped Software. I’m an electronic engineer. It’s important because this is a pretty legal presentation, but I also have former experience as a forensic expert, of course, in cases related to images and videos. And I’ve been a contract professor at various universities, but now I’m fully focused on Amped Software, as you probably know.

And yeah, about us – we founded the company in 2008 in Italy, and for a few years now we have also had an office in the US. Our software is used by law enforcement and government agencies and private labs all over the world for working on image and video forensics. And the vision that stands behind everything we do is the concept of “justice through science,” which I think is very important and related to the content of this webinar. And in this beautiful picture, you can see the entire team on top of a mountain at our AMLO meeting, which we held in January. So it pretty much represents our mood.

Okay. Why this presentation? As you probably know, unless you are living under a rock, AI is everywhere, or almost – not very much in our software yet, and for a reason. Law enforcement applications are a big part of the Act, a very big part. We, as software vendors, develop software, and from this point of view we are subject to the Act. But you – as I assume most of you are our users – are also subject to the AI Act, and you should be aware of the potential risks of using non-compliant technologies and, when you are using our technologies, of the things to watch out for.

It’s also important for non-European organizations. I see among the participants a lot of names of people I know from outside of the European Union. This is pretty important because the AI Act is a European Union law, but, like the GDPR privacy regulation, if you are from outside Europe and you are working with customers in Europe, or you process the data of European citizens, you need to be compliant with it.


The fact that you are not in Europe doesn’t exempt you from respecting it in those instances. And also, just as the GDPR – the privacy regulation – has served as inspiration, let’s say, for many other states and countries, we can probably expect something similar to happen with the AI Act.

As you’ll see in a few minutes, non-compliance fines are huge. So what’s the objective of this webinar? First of all, it’s based on a big study you may have seen on our blog; I will share the link at the end of this presentation. I did a lot of work, initially for our own use as a software company, to understand which of the activities that our users carry out are subject in some way to some of the provisions of the Act.

And again, a big disclaimer: I am not a lawyer and this is not legal advice. It’s my reflection on a very long and complex law, and as such, this webinar will maybe be a bit different from our typical webinars with a lot of nice enhancements, license plate examples or other hands-on software. So it’s quite dense, I would say. But of course, you can watch a one-hour webinar or read 150 pages of the law, as I did multiple times – you can choose.

Okay, the big marketing line of the European Union is that this is the first AI regulation worldwide, and it has been advertised a lot like this. A common way of putting it is that Europe is an innovator in regulation and a regulator of innovation. I think these two definitions are pretty much on the spot – and we have indeed been the first.

Okay. First of all, we start with a very brief overview of the AI Act in a nutshell. So what is it? It’s a European law of about 150 pages, so there’s a lot of stuff. It was published in July 2024. If you have been following my LinkedIn account, you’ll have seen that I shared the news multiple times, because the approval actually happened in multiple steps. So it felt like the fourth or fifth time we saw news about the approval, but the final one, the real one, was July 2024.

Most parts will be compulsory by 2026 and 2027 – it happens in steps, and some parts are already applicable, as we’ll see later – and there are some exceptions. It does not apply to use within national security, the military, research, and partially for open source software. This is pretty interesting from our point of view, because some of our users are borderline with some of these: sometimes it is a bit difficult to distinguish where law enforcement and public safety end and national security begins.

It probably depends on the kind of organization and activities, but sometimes the lines are blurred. And the penalties are very big: the penalties for non-compliance can be up to 7 percent of the global turnover of an organization or 35 million euros, whichever is the greater of the two.

And it’s important that this is not profit, but turnover, and it’s global – not only of the European headquarters of a company, let’s say, but of all the offices around the world. So this can potentially bankrupt a company. There are some categories defined at a very high level in the AI Act. First of all, there are the prohibited AI practices, which are, of course, prohibited – they can’t be done. Then there are what they call the high-risk AI systems, which can cause some risk from different points of view. They can be used, let’s say, but subject to some compliance requirements that we’ll see later.

Then there are what are usually called low-risk AI systems. Interestingly, they are not explicitly defined – there is no definition, and low-risk AI systems are not even mentioned in the AI Act – but they are implied by exclusion: anything that falls outside the other categories is low risk. There is then the exception of what the law calls “certain AI systems,” with some definitions, which can roughly be approximated with generative AI tools – those used to create text, images, videos, audio and so on with AI. These have some transparency obligations that we’ll see later. And finally, there is what they call general-purpose AI, which is essentially at the core of many popular applications that can do many different things, and which also needs to adhere to some rules.

Going through the law, we will look at some important definitions. First of all, the first article, the purpose. Throughout the presentation you will see text in italics: this means it has been copied and pasted from the law, and I have highlighted some important words. Essentially, this defines the overall idea behind the law. Here you can see that it is very much in line with the European Union’s fundamental values.

Here, the objective is to have human-centric and trustworthy AI and, above other things, to preserve fundamental rights, democracy and the rule of law. Then there are many other important things – safety, health, environmental protection – but essentially, for this reason, a good part of it is important for our field. Law enforcement use is a relevant part of the law.

The second big definition, let’s say, is “AI system.” It’s pretty difficult, and I split it into different points just to make it clear – actually, it’s a single sentence in the law: “AI system means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” It’s pretty bad.

At the beginning, when I was trying to study this, it seemed that even normal software could fall under this definition, but then, luckily, they released some guidelines that specify better what is considered an AI system.

It’s a document of many pages that goes very much in depth, with examples, on the points that we’ve seen before, but essentially the important thing is that it defines more clearly what we normally consider AI in general – even though there are many different kinds of AI, not only generative AI or deep learning, which are the most popular nowadays.

So essentially, a critical aspect is right here: “AI systems should be distinguished from simpler, traditional software systems or programming approaches. It should not cover systems that are based on the rules defined solely by natural persons.” Okay, so this means that software where it is the human who programs the rules is not AI; to put it in a very informal way, it is AI when the system, usually with a dataset, has learned what the right parameters and rules are – basically, learning from data.
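To make that distinction concrete, here is a minimal sketch – my own illustration, not something from the webinar or from the guidelines, with made-up function names and thresholds – contrasting a rule written entirely by a person with a parameter derived from example data:

```python
from statistics import mean

def is_overexposed_rule_based(pixels):
    """Rule defined solely by a human: the threshold is hard-coded by the programmer."""
    return mean(pixels) > 200

def fit_threshold(labelled_frames):
    """'Learning' in miniature: the threshold is derived from labelled examples rather
    than written by hand. Real AI systems learn millions of parameters this way."""
    bright = [mean(p) for p, label in labelled_frames if label == "overexposed"]
    normal = [mean(p) for p, label in labelled_frames if label == "ok"]
    return (min(bright) + max(normal)) / 2

# Toy training data: (pixel values, human-provided label)
examples = [([250, 240, 230], "overexposed"), ([90, 100, 110], "ok"),
            ([220, 235, 245], "overexposed"), ([60, 70, 80], "ok")]
learned_threshold = fit_threshold(examples)

print(is_overexposed_rule_based([210, 220, 230]))   # rule written by a person
print(mean([210, 220, 230]) > learned_threshold)    # rule derived from data
```

The first function stays traditional software under the guidelines; the second, scaled up to millions of learned parameters, is the kind of system the Act is concerned with.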

Then there are definitions of the various subjects that are involved in the AI Act. Normally I call us vendors; in the AI Act they call us providers – in our case, the developers of the technology. Then there are the end users, who throughout the law are called deployers. Okay. And then there is a broader term, operators, which covers other entities too: an operator can be the provider or the deployer, but also the product manufacturer, the authorized representative, the importer or the distributor.

So all of these in general are called operators, but the vast majority of what we will focus on will be the provider and the deployer. So we are providers, you are deployers. And then there is a very precise definition of law enforcement, which they define roughly as any public authority competent for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, but also any other body or entity that has been entrusted with these duties.

What does it mean? That, according to my interpretation, of course, what we will see throughout the law about law enforcement is actually applicable in exactly the same way to private forensic labs, because we assume these private labs have been assigned by the public authority to do this kind of job as well. Another definition that I quite like is deepfake.

I usually write “deepfake” all together, without the space, but they prefer the other form, and it is the way it is. And they define it like this: “deep fake means AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful.”

Interestingly, here we see that it’s not so much about the technology – of course, it has to be AI-generated or manipulated – but also about the context in which we evaluate the image. And I think it’s pretty much in line with the SWGDE (Scientific Working Group on Digital Evidence) definition of authentication, which is the process of substantiating that the data is an accurate representation of what it purports to be. Okay. So again, it’s not really about the technology – it’s about whether the content actually represents the truth or not.

And so we can make a couple of examples. Okay. You probably know Midjourney – it’s one of the many applications where you can do text-to-image. Okay. So I asked Midjourney to “create an image of a night-time realistic photo of a group of a cop looking at himself in the mirror seen from the side.” And I got this image, which is pretty much in a realistic, photographic style.

So if I pretend this image is a picture, a real photo of a real person, this is a deepfake. Okay. But then I used the same technology for doing this: a drawing, in the style of a seven-year-old, of a dinosaur riding a motorbike, with some other technical details in the prompt, okay?

This is clearly a drawing. I’m not pretending that a real dinosaur is riding a real motorbike, okay? So it’s not a deepfake. But if I pretend that this drawing has been done by my seven-year-old son, for example, maybe it is a deepfake, because the context is different. We could probably discuss that – it’s a very philosophical point. But this is just an example of the idea behind it.

Over this presentation, we will evaluate typical image and video forensic activities and see what the AI Act has to say about them. The first is a very generic application: image and video search and content analysis. So a video summary, find all the people with a blue t-shirt or the red cars in a video, find all explicit videos, pictures with drugs, guns – content analysis, okay? This is one possible application that we will study.

Then face recognition on recorded video. Typical question: is this the same person? This is me, believe it or not, many years ago. Yes, it is the same person. And this is a typical question that can probably be solved by AI, but should it be?

Then, license plate recognition on recorded video. You may have already played with our tool DeepPlate, which, from a very low-quality image, estimates some possible character combinations – recommended for investigative use only, of course. So this is another topic that we are investigating.

Then image and video redaction, pixelating, blurring, putting a black rectangle over sensitive details, another very common practice.

And then there is image and video authentication. For those of you who are familiar with Authenticate, you know that we provide many different tools inside it. There are traditional tools, based on traditional image forensics papers and totally unrelated to AI, but a few years ago we also added some AI tools to complement the traditional ones, because it’s pretty hard, even though not impossible, to fight AI without AI.

So this is our GAN image detection tool. On the left you see my colleague Marco, who is clearly not an image generated with a GAN (Generative Adversarial Network), while the other person has been created with the website thispersondoesnotexist.com, and of course it’s detected as such. And then we have our classical image and video enhancement – the typical example of a blurred license plate. You probably already know everything about it. So this is, of course, a very important topic to investigate in light of the AI Act.

And then let’s go into the depths of the AI Act. Now we’ll compare this list of typical activities with what the AI Act says is prohibited. Will any of these activities be prohibited when done with AI? Let’s see.

So what are the prohibited practices? I’ll simplify them a bit – of course, if you want to go into the very nitty-gritty details, there is the law. There is behavioural manipulation, so using AI to unconsciously change people’s attitudes; social scoring, evaluating the behaviour and characteristics of people; predictive policing, which is partially prohibited – not in all instances, but some parts are prohibited; emotion recognition in the workplace and in education, which is prohibited; and biometric categorization for some specific purposes that we are not going to go into in depth here.

This one is interesting – scraping of facial images for face recognition from CCTV or the Internet. This part of the law, I think, has been written with a specific use case in mind. I won’t say the name of the company, but there is a pretty well-known company that created a database of faces by scraping Facebook and other sources, and it has been fined millions of dollars and euros by many European countries, because it’s totally against our privacy regulations.

So they put this in the AI Act too. And then, law enforcement use of real-time biometric identification in public. The keyword here is “real time,” okay? Doing forensics, we are not much interested in real time, but in recorded video.

So there are some exceptions to this prohibition, okay? Oh, and by the way, these prohibitions have already been in force since this February, okay? This stuff is already forbidden; you cannot do it in the European Union. What are the exceptions for real-time biometric identification? The most common application is, of course, face recognition: cases of abduction, trafficking, sexual exploitation or missing persons; imminent risk to life or of a terrorist attack; and identification of suspects in serious crimes that are punishable with detention of at least four years.

Okay. This was a big point of the final negotiation between different member states, because some wanted more power to investigate and some were more protective of privacy. So it was a big discussion in the last part of the negotiation of the law.

So, interestingly, let’s go very quickly over our topics: image and video search and content analysis; face recognition on recorded video (on recorded video, because real-time, as we’ve seen, is prohibited); license plate recognition on recorded video; image and video redaction; image and video authentication; image and video enhancement.

These are not prohibited, so the first step is okay. Then let’s see if some of these activities fall under the high-risk category. These are defined in Article 6 and Annex III. Putting them together is a bit complicated and it took some time, but I did it. So what are high-risk AI systems? Safety components of some products like cars; biometric identification – not verification, only identification; biometric categorization in general (some specific instances, as we’ve seen, are prohibited).

We’ve also seen that emotion recognition is prohibited in work and education, but in general it is high-risk; then critical infrastructure, education, employment and workers’ management, medical devices, access to services like insurance, banks and so on, law enforcement and border control, justice, and elections and voting. I have highlighted here the parts of potential interest for us.

So what I did was go and study those specific articles. The first is biometrics. The first thing, which is written everywhere, is “insofar as the use is permitted under relevant Union or national law.” This means that the AI Act is not the only law that we have in Europe or in the member states: maybe according to the AI Act something is allowed, but there are many other laws to consider. That’s important – this does not supersede other laws. And in general, these are remote biometric identification systems. Okay.

So what does it mean? That face recognition on recorded video is considered a high-risk activity. Again, on recorded video; in real time, it’s prohibited. Okay. And remember, it’s not just face recognition: it’s any biometric identification system in general – face recognition is simply the most common and critical from this point of view.

Then we have law enforcement as a category, again if the use is permitted under other laws. The first point, which is very interesting, is “AI systems intended to be used by or on behalf of law enforcement authorities.” Then another item is “polygraphs or similar tools.”

I studied this one a lot, because I think we probably need a more precise definition of whether it relates only to polygraphs or to something more; if we interpret it literally, it can be much wider. But it seems to be limited to lie-detector kinds of tools. And then this is also interesting: “AI systems intended to be used to evaluate the reliability of evidence.”

And what is an example of this? Image authentication, of course. So we will check this later, but essentially we already have a hunch that deepfake detection – or, in general, image and video authentication done with AI – is covered, since it is used to evaluate the reliability of evidence. And then we have another section, which is migration, asylum and border control management.

Again, there is the same item about polygraphs or similar tools. And then we have “AI systems used, blah, blah, blah, for the purpose of detecting, recognizing, or identifying natural persons.” Okay? So this is very wide. And it’s interesting that border control – which one might consider related to law enforcement – actually has stricter rules. Okay. So this is pretty important for our analysis.

Then there is justice and voting. I looked at this, since what we do is related to justice, but essentially the only part that could be somewhat related is this one: “to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts.” And this is what we call the robo-judge or the AI judge. So it is not related to what we do with videos, essentially.

So, after this deep dive into the high-risk activities – oh yes, there is the derogation. Even if you are in one of those cases, you may not be subject to it if the AI system is intended to perform a narrow procedural task (so it’s not doing the entire job, just a small part), if it’s used to improve the result of an activity already completed by a human, and/or if it’s a preparatory task to an assessment that is done in other ways.

Okay. If you think you’re subject to the derogation, you need to document it and do an assessment before placing the system on the market. Okay. And this is something I discovered pretty recently – on my first analysis, I didn’t notice it. This is interesting: there are systems that are so-called “grandfathered.”

What does it mean? That if a high-risk AI system has been put on the market before the 2nd of August, even though it’s a high-risk system, it does not need to be compliant unless there are big changes to it. And if it’s being used by public authorities, then you have time until 2030 to become compliant. This is pretty interesting.

Okay, so are our typical image and video forensics activities high risk? Let’s see, one by one. Image and video search and content analysis: in general, this is very wide and could include various activities, but in general – think of a video summary, find all the cars and things like that – it is not a high-risk activity. Maybe there is an exception in the context of border control, but that would likely be derogated as a preparatory task: search for all the cars, and then the human investigates further. And this is for recorded video. Real-time analysis, again, can be a bit more problematic, especially if it’s done for profiling, which is an entirely different matter.

Face recognition on recorded video is a high-risk activity, very clearly, and other biometric identification systems are also high-risk. And let’s remember that real-time biometric identification and the scraping of facial images from the Internet to build face recognition databases are prohibited. Very important.

Then, license plate recognition on recorded video: it’s not a high-risk activity – there isn’t anything written about it in there – but there is a nice paper written by Larget et al. in 2022. They worked on a draft of the AI Act and came to conclusions similar to ours in many aspects, but they had a different idea about this: they think that other systems, like license plate recognition or other kinds of photographic comparison, should also become high-risk, because they can be used for identification.

Essentially, we can tie a license plate to the individual who owns the car. So they expect this to be considered high risk, but there is nothing written in the law, as far as I’ve seen, that does this. Of course, things can change.

Image and video redaction, of course, as expected, is not a high-risk activity. Image and video authentication, yes: as we’ve already seen, it is probably a high-risk activity when done with AI – not with other techniques, of course – because it is used to evaluate the reliability of evidence, okay? And the authors of the paper I mentioned agree on this.

Pretty clear again. Though if you think about our use case – because in Authenticate we have many other tools – in any case the result, the final analysis, is put together by the analyst; it’s not that an image is automatically classified as fake or not by an AI and that’s it. It’s just a tool that is being used by a human. So maybe some derogation may apply in this case.

Then we have image and video enhancement, which I’m pretty big on. According to this list, it’s not a high-risk activity – though maybe it could become high risk when done with AI. Again, traditional algorithms, for example those that we have in Amped FIVE, are not based on AI, so we are not even remotely discussing that for now. But maybe it could be a problem in contexts related to border control – maybe derogated, but that’s a longer discussion.

Okay. But one thing about AI enhancement I want to mention is that it’s pretty risky. I’ve been discussing this over and over. This is an example that I show in other webinars that I think is pretty impressive; it’s on the homepage of the tool that is linked here. It gives impressive results – amazing results that are definitely fine for your vacation photos. But if you think about this as evidence, you can very clearly see that it’s changing the brand of the car when enhancing it, and it’s making up the numbers and letters of the license plate out of nothing.

If we are working with evidence, this is very risky, because it can create something that looks like a legitimate, high-quality image that we can put our trust in. But it’s actually not, because it’s an image that has been AI-generated and AI-manipulated. It’s very risky and very interesting. And there was a somewhat recent case, about one year ago: the first big case, in March 2024, where some videos were ruled inadmissible in court in the US because they had been enhanced with an AI tool.

Essentially, as US law works on the basis of previous cases, this sets a pretty strong precedent: AI enhancement is not acceptable. It was, if I remember correctly, a Frye hearing, and several experts were called to testify on it. After that, there were quite a few interviews with experts in the field and discussions in legal journals. And it was pretty clear that, for various reasons – mostly legal at this point, about the acceptability of the science, not preconceptions about AI – this was not deemed acceptable. And I pretty much agree with that, from my position.

What are the requirements for high-risk AI systems, now that we’ve identified them? There are a lot of articles of the law; we’ll go through the main points, but of course, if you need to make your software compliant, you need to do a lot of work. The first part is data and data governance – essentially, keeping under strict control the datasets used for training, validation and testing. Okay.

So you need to track very carefully the process of data collection, the origin of the data, and the purpose of data collection – because maybe, for privacy reasons, you are authorized to use those images for marketing, for example, but not to train an AI. Then there is the examination of possible biases that can have a negative impact on fundamental rights. Okay. So the dataset should be built in a way that minimizes bias, as it’s written in the next point.

And then, of course, the dataset should be representative. Let’s think about face recognition: if I don’t train the system on a dataset which is relevant and has more or less the same proportion of people of different ethnicities as, for example, the country where I’m using it, then the results will be wrong. It already happens.

And it should be free of errors and complete – databases are huge, so being complete and free of errors is quite a challenge – and this puts a bigger responsibility on the vendor, because for AI the datasets are the most complicated and biggest thing we have to create, and they should be kept under control. Which is pretty correct.
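To illustrate the representativeness point in practice – a hedged sketch of my own, with invented group labels and proportions, not a procedure prescribed by the Act and not something from our products – a very basic data-governance check could compare the composition of a training set against the expected population and flag under-represented groups:

```python
from collections import Counter

def representation_report(sample_labels, reference_shares, tolerance=0.8):
    """Flag groups whose share in the dataset is well below their expected share
    in the population where the system will be deployed."""
    counts = Counter(sample_labels)
    total = len(sample_labels)
    report = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        report[group] = {
            "observed": round(observed, 3),
            "expected": expected,
            "under_represented": observed < tolerance * expected,
        }
    return report

# Hypothetical training-set labels versus a hypothetical population breakdown
training_labels = ["A"] * 800 + ["B"] * 150 + ["C"] * 50
population_shares = {"A": 0.6, "B": 0.3, "C": 0.1}

for group, row in representation_report(training_labels, population_shares).items():
    print(group, row)
```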

Then, record-keeping. Okay. The system should have logging capabilities: for example, recording the period of use; the reference database (because the database being used now may be different from the one of five years ago); the input data that has led to a match; and the natural persons involved in the verification of the result, because results must be verified. I see a lot of privacy complications here in saving all this data – again, having one without the other is not always easy.
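Just to give an idea of what such a log could look like in practice – a hedged sketch with field names I invented for illustration, not a schema prescribed by the Act or used in any specific product:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class BiometricSearchLogEntry:
    session_start: str          # period of use
    session_end: str
    reference_db_version: str   # which reference database was searched (it changes over time)
    input_data_hash: str        # identifies the input that led to a match, without storing it
    match_ids: list             # candidate identities returned by the system
    verified_by: str            # natural person who verified the result

entry = BiometricSearchLogEntry(
    session_start=datetime.now(timezone.utc).isoformat(),
    session_end=datetime.now(timezone.utc).isoformat(),
    reference_db_version="faces-db-2025.03",                 # hypothetical version tag
    input_data_hash="sha256:<digest of the probe image>",    # placeholder
    match_ids=["candidate-0042"],                            # hypothetical identifier
    verified_by="analyst J. Doe",
)
print(json.dumps(asdict(entry), indent=2))
```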

And then another big thing about AI is transparency. There should be enough information given to deployers to understand the output of the system and to use it: for example, what the purpose of the system is, when and where it works, how robust it is, in which situations it can give wrong results, and things like that – where it can be misused, or when the conditions do not allow its use. And then there are a lot of other parts, like providing information that is relevant to explain its output, performance on specific groups of persons, specifications for the input data, and information to interpret the output.

All of this is not easy. In fact, they put in some workarounds – “where applicable,” “when appropriate,” “where applicable” – because this is the big problem with AI, especially the ability to explain its output: very rarely are we able to explain the result given by an AI. They are like a black box. That’s the main issue, let’s say, for AI. And then we have human oversight. The big point is that these systems should be effectively overseen by persons, and those persons should be able to detect anomalies and unexpected performance. They should be aware – so there should be education.

There is what is called automation bias – our tendency to instinctively trust, maybe too much, the result given by a machine, because normally machines work better than humans, maybe – and people should also be able to interpret the outcome. Okay. And of course, the human should be able to override it: not to use the AI system, or to reverse its output, and to interrupt the system with a stop button. Maybe it doesn’t make sense for some applications, but for something that has a physical impact, a stop button is pretty important.

Okay, so we’ve seen very quickly the main compliance topics for high-risk systems. But there is another category – what we called earlier “certain AI systems” – that we can oversimplify as AI image and video generation tools, okay? What are their obligations? They’re right here in Article 50, okay?

It says: “providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video or text content, shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated.” Essentially, any output created by generative AI should be digitally signed or watermarked.

This is already done by certain mobile phones, which already embed some information, and also by some of the image generation tools, which add these watermarks or metadata – which, of course, are not always foolproof. What I’m mostly worried about, even though it’s not our topic, is text content, because there are some ways to watermark text, but it’s not as easy.
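As a toy illustration of what “marked in a machine-readable format” can mean at its simplest – my own sketch with an invented tag name, nothing like a real provenance standard such as C2PA-style signed manifests or invisible watermarking, which are far more robust:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_ai_marker(image, path, generator):
    """Embed a simple provenance tag in a PNG text chunk."""
    meta = PngInfo()
    meta.add_text("ai_generated", "true")      # invented tag name, for illustration only
    meta.add_text("ai_generator", generator)
    image.save(path, pnginfo=meta)

def read_ai_marker(path):
    """Return any ai_* text chunks found in the file."""
    with Image.open(path) as img:
        return {k: v for k, v in img.text.items() if k.startswith("ai_")}

synthetic = Image.new("RGB", (64, 64), "gray")          # stand-in for a generated image
save_with_ai_marker(synthetic, "synthetic.png", "hypothetical-model-v1")
print(read_ai_marker("synthetic.png"))
```

A plain metadata tag like this is trivially stripped when the file is re-saved, which is exactly the limitation mentioned above: metadata-based marking alone is not foolproof.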

So we’ve solved the problem, right? We’re done, because everything is watermarked, so we can easily find it – no problems. Of course not, because first of all this is a European law and not all providers are in Europe, and of course not everybody respects the law – otherwise we wouldn’t be here. And then, of course, there are a lot of open source tools that make this unenforceable, because even if an open source library or tool has the watermarking feature, being open source, a programmer can easily remove it.

So there are some exceptions. Even though we have AI-generated or manipulated images, we don’t need this kind of disclaimer, this transparency, if it’s just an assistive function for standard editing – for example, I use a traditional brightness and contrast adjustment, but the optimal settings have been found by AI; that is an assistive function. Another exception is where it is authorized by law to detect, prevent, investigate or prosecute criminal offences: the most obvious thing I can think of is a fake social media profile made with AI-generated images and maybe AI-generated text.

Of course, if I’m an undercover agent trying to investigate something with a fake bot or whatever, I cannot put “I’m a fake” on it. And then there is the most important exception for us: “when they do not substantially alter the input data provided by the deployer or the semantics thereof.” And here we can have a big discussion.

So this is one of the tools that we tested for AI enhancement. We took a picture of Tommy Lee Jones, we made it smaller by downscaling it, and then we made it bigger with an AI tool. Okay. So if you look at the picture on the right, you can see it’s a pretty high-quality picture, and I can also say that it resembles the picture on the left a lot. Okay.

But on the other hand, if I need to use these for identification of a person, this is completely unreliable, completely dangerous because it looks like something I can trust. Of course, from the picture in the middle I can’t say much, but at least I wouldn’t over-rely on it.

In the picture on the right, I seem to have some good material for investigations, but actually the nose shape is wrong, the pretty peculiar eyes of Tommy Lee Jones are completely changed, and a distinctive trait – a kind of red mark on his forehead – is gone. So, does this “substantially alter” the input? It’s a very vague definition. Okay. So this is important.
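For anyone who wants to reproduce that kind of test, here is a hedged sketch of the procedure: shrink a known reference face, blow it back up, and measure how far the result drifts from the original. The file name is hypothetical, and I use a plain bicubic resize as a placeholder for the enhancement step; in the test described above an AI super-resolution tool was used, and that is exactly where the invented detail appears.

```python
import math
from PIL import Image, ImageChops

def rms_difference(a, b):
    """Root-mean-square pixel difference between two same-size images, in grayscale."""
    diff = ImageChops.difference(a.convert("L"), b.convert("L"))
    hist = diff.histogram()
    total_sq = sum(count * (value ** 2) for value, count in enumerate(hist))
    return math.sqrt(total_sq / (a.size[0] * a.size[1]))

original = Image.open("face.jpg")                                   # hypothetical reference image
small = original.resize((original.width // 8, original.height // 8),
                        Image.Resampling.BICUBIC)                   # simulate a low-quality source
restored = small.resize(original.size, Image.Resampling.BICUBIC)    # swap in an AI upscaler here

print("RMS pixel difference vs. original:", round(rms_difference(original, restored), 2))
```

Beyond a global score, the forensically relevant comparison is local: checking whether distinctive traits – eyes, nose shape, marks – survived or were quietly replaced.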

So that was for those who create the tools. This is for deployers: you should disclose that the content has been artificially generated or manipulated. Then there are exceptions: for example, if you do something artistic or satirical and the disclosure would hamper the enjoyment of the work, then the requirement is attenuated. Again, when authorized by law you can avoid it, but the responsibility is still on your end too.

Okay, so let’s come to the conclusions. I want to compare all this with the pre-existing position that we had on AI – I’ve been studying this topic for years; the post you see linked here is from 2021. If you have been following this, you know that I was dividing things into categories: evidential use, so using the result as evidence, versus investigative use – a lead, intelligence, things like that. And then I divided enhancement, where I produce another image, from analysis, where from an image I get a decision, like face recognition.

And in general, my big no has always been using AI-based enhancement for evidence, for the reasons that we have seen on the previous slide: because it’s not explainable, and there is bias from the training data. For investigative use, we could probably use it, but I need to be sure that intelligence doesn’t become evidence, because then it’s a big deal.

So, for example, putting a watermark on the image, or writing “not for evidential use” or something like that on it. And I need to educate operators about the risks, because if you over-rely on this image – you look for this guy and it’s actually Tommy Lee Jones, who looks different – then you can go in the wrong direction.

As regards analysis, you can probably use it both for evidence and intelligence, with safeguards: only for decision support, so the human must always have the last word; I should know the reliability – when it works, when it doesn’t, what the success rate is – and mitigate the bias of the user. How does this compare with the AI Act?

Of course, mine are minor rules of thumb, very specific to our field, while the Act is huge and much more detailed – it’s a law, not a blog post, let’s say – but you can see that the key points are more or less in line. You see, “only for decision support” is the article on human oversight; “know the reliability” is transparency and provision of information to deployers.

“User bias mitigation” is in the article about AI literacy, which we didn’t go through, but it’s another part of the AI Act, together with the risk management system. One thing that I didn’t write about, and that I think is pretty important, is data governance. Of course, that’s not so much about how the user uses the system, but about how I train the system with the proper datasets.

So let’s summarize what we’ve discussed. Probably – and I say probably because, again, this is not legal advice, it’s my interpretation – probably high risk when performed with AI: image and video authentication, not just deepfake detection but also the detection with AI of traditional forgeries, and face recognition in the post-event analysis of recorded video. Again, in real time it’s prohibited, save for some specific situations. So these are the two things to be aware of.

Probably not high risk when performed with AI: image and video search and content analysis; license plate recognition (for now – maybe it will change, but for now it’s okay); image and video redaction; and image and video enhancement. But, as you have seen in the last part, according to the AI Act image and video enhancement is okay only if the image is marked as AI-generated or manipulated. This is very important. And again, the fact that the AI Act allows it doesn’t mean that it should be used for evidence – this is just one part of the law, and it’s not specific to, let’s say, investigations.

I want to leave you with some final notes and thoughts – maybe we can also have a small discussion over the chat; I didn’t manage to keep up with the chat, of course, because I was speaking. So again, I’m not a lawyer, so for official legal advice, talk to your legal counsel. Things can and will change, because the technology is changing so fast and there are so many things that the law still needs to define for actual application.

Even though some parts are already in effect, especially the prohibited practices, to actually be compliant there are many more guidelines that we expect to come, because right now it’s too vague. Okay, and of course it’s not time yet. And then there is one of the questions that I was asking myself.

So let’s take, for example, deepfake detection. Okay. There are lots of websites and tools that claim to do deepfake detection, with different rates of success. Okay. Let’s say I’m developing one of these tools and I put it out in the wild, maybe just for people to test things on social media, and you, as law enforcement, use this tool – which was not developed explicitly for law enforcement use, but still can be used.

So we have identified that deepfake detection is potentially a high-risk activity, but this developer didn’t follow the AI Act, because they’re in some other part of the world and didn’t intend the tool to be used by law enforcement. So who is responsible for its use? Probably you, but also the developer? I don’t know, but it’s something that left me thinking.

Then, this is a big one. We’ve seen over and over that, compared to many other countries that are very aggressive towards AI adoption, in Europe we have this Act, and you have seen it’s pretty stringent. Okay. So is it a risk for innovation? I think so. But different countries have different values and focus, and if you go and read the fundamental values of the European Union – the focus on privacy, the attention to human rights and fundamental rights and things like this – I think it’s very coherent.

So you may not like it, because you would like this group of countries to be more aggressive and not to limit the technology. And we can have different opinions – I can be more conservative, you can be more aggressive; it doesn’t matter. I think it is coherent with the fundamental values of the European Union.

And again, yes, probably it is a risk for innovation. And related to this, there is a big thought about the adoption of AI that I want to leave you with. Let’s imagine that AI becomes a kind of oracle – an oracle which is a black box, okay, and that is almost always right. Let’s say 99.99 percent of the time it’s right.

For legal decisions – putting a person in jail for life – it’s probably right more often than people are; let’s say it is, for the sake of argument. Still, you trust it, but you don’t have a way to verify when it actually works or not, and you don’t know how it works inside. Would you trust it for critical decisions or not? It’s almost always right, but you don’t know why and how – would you trust it or not? I think this is a big question that I leave you to answer, because everybody can have different opinions on this. And that’s it.

Here we have the QR code and the link to my big blog post with more or less the same content as this webinar. I hope you enjoyed it – I know it’s a heavy topic; it was heavy for me to study, but very interesting. I hope I made it a bit clearer, and yes, thanks for being with us. Let me check the comments.

Oh, I see. There is a lot. Yeah, we have a question from Carrie. Okay.

Okay. Three minutes ago, one of the last ones. Okay, let me check. Thank you. Thank you. Okay: can you speak a bit about the AI Act implications for DeepPlate? As I said, according to the AI Act, for now it’s not a prohibited activity and it’s not a high-risk activity. Okay. On the other hand – let me go back to the slides so it’s clearer.

Okay. It’s a tool for analysis, okay? So we don’t do enhancement with DeepPlate. To be transparent, inside DeepPlate an image is actually created as well, and in early versions – internal versions for development – we also showed that image. What is the problem? Once you show the image to a user, and it’s a very nice image, they put too much trust in it, while if I just give you the characters with a level of confidence, you end up more skeptical, as you should be.

We implemented DeepPlate with all the safeguards. First of all, there are disclaimers everywhere that it is only for decision support. To minimize bias, we also tell you that you first need to analyze the license plate yourself and then use the tool, so you are not biased by its results. And then, regarding reliability, we did a lot of testing with real license plates, and we are seeing if we are able to publish a paper on that as well. So we put in the safeguards but, again, for the AI Act it is not an issue at all.
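To illustrate the presentation choice just described – only as a sketch of the concept; this is not DeepPlate’s actual output format or internals, and all characters and probabilities below are invented – the idea is to return ranked character hypotheses with confidence values rather than a single reconstructed image:

```python
# Hypothetical per-position character probabilities, as a model might output them
per_position_scores = [
    {"A": 0.62, "4": 0.21, "R": 0.17},
    {"B": 0.55, "8": 0.30, "E": 0.15},
    {"7": 0.71, "1": 0.29},
]

def top_hypotheses(scores, k=2):
    """For each character position, list the k most likely characters with their confidence,
    so the analyst sees the uncertainty instead of a single polished answer."""
    ranked_positions = []
    for position, candidates in enumerate(scores, start=1):
        ranked = sorted(candidates.items(), key=lambda kv: kv[1], reverse=True)[:k]
        ranked_positions.append((position, [(ch, round(p, 2)) for ch, p in ranked]))
    return ranked_positions

for position, candidates in top_hypotheses(per_position_scores):
    print(f"position {position}: {candidates}")
```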
