In a detailed new article, Martino Jerian, CEO and Founder of Amped Software, analyzes the significant implications of the AI Act for the world of forensic investigations. The article, titled “How Does the AI Act Impact Image and Video Forensics?”, was published on Amped Software’s blog. It explores the challenges and opportunities presented by the AI Act, which aims to regulate the use of artificial intelligence across sectors, including image and video forensics.
The AI Act, which entered into force in August 2024, introduces a comprehensive legal framework designed to safeguard fundamental rights, protect public security, and ensure the ethical use of AI systems. With its phased application through 2027, the law categorizes AI systems by risk level, from prohibited practices at one extreme down to high-risk, limited-risk, and minimal-risk systems. Martino’s article explains that AI technologies used in forensic contexts, such as biometric identification and image authentication, are classified as high-risk and therefore face stricter oversight.
Martino underscores the importance of understanding these new obligations: compliance with the AI Act, he stresses, is not optional for forensic professionals. The law introduces significant penalties, with fines of up to 7% of global annual turnover for organizations that fail to meet its requirements.
The AI Act’s core mission is clear. Martino notes that the regulation aims to “promote the uptake of human-centric and trustworthy artificial intelligence while ensuring a high level of protection of health, safety, and fundamental rights.” This is particularly crucial in forensics, where the use of AI systems can directly affect the course of justice.
According to Martino’s analysis, key forensic technologies such as facial recognition on recorded video and AI-based image authentication are classified as high-risk. He highlights that the AI Act imposes stringent compliance requirements on these systems, demanding transparency, human oversight, and robust risk management. The Act specifically targets systems used in law enforcement, including those intended to be used for “evaluating the reliability of evidence in the course of the investigation or prosecution of criminal offences.” This means that forensic technologies like deepfake detection, and even traditional forgery detection, will come under intense scrutiny.
Martino also draws attention to practices prohibited under the AI Act, particularly the creation of facial recognition databases through untargeted scraping of images from the internet or CCTV footage. The use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement is likewise prohibited, except under tightly controlled conditions. These measures reflect the EU’s commitment to preventing the misuse of AI in ways that could threaten individual privacy or public security.
While he acknowledges the need for such safeguards, Martino also expresses concern about the potential impact on innovation in forensic technology. The AI Act, he observes, is undoubtedly a critical step toward responsible AI use, but there is a risk that such stringent regulations, particularly in Europe, could slow the pace of innovation in AI-based forensic tools. As the GDPR has shown, compliance can be a heavy burden for companies, especially smaller players in the market.
The AI Act also stipulates that all high-risk AI systems must undergo thorough documentation and risk assessment processes, including ensuring that training, validation, and testing data are free from biases that could affect the outcome of forensic analysis. Martino points out that this emphasis on data governance is a welcome development for forensic professionals: training AI systems on representative, unbiased data is critical in forensics, because the integrity of evidence can be compromised if the tools are not properly vetted and tested. In his view, the AI Act pushes the industry to adopt higher standards, which is ultimately beneficial for the credibility of forensic analysis.
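Neither the AI Act nor Martino’s article prescribes how such data checks should be implemented, but a minimal, purely hypothetical Python sketch illustrates the kind of representation audit the data governance requirements point toward (all names, labels, and thresholds below are invented for illustration):

```python
from collections import Counter

def flag_underrepresented(groups, min_share=0.10):
    """Return groups whose share of the training set falls below
    min_share. Purely illustrative: a real AI Act conformity audit
    would go far beyond simple group counts."""
    counts = Counter(groups)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items() if n / total < min_share}

# Hypothetical demographic labels attached to a face-recognition training set
sample_groups = ["group_a"] * 700 + ["group_b"] * 250 + ["group_c"] * 50
print(flag_underrepresented(sample_groups))  # {'group_c': 0.05}
```

A genuine assessment would of course also cover data provenance, labeling quality, and performance disparities across groups, not just raw sample counts.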
As Martino outlines, human oversight remains a key requirement under the AI Act. The law mandates that AI systems used in high-risk scenarios must include mechanisms that allow human operators to monitor and, if necessary, override the AI’s decisions. He stresses that during forensic investigations, AI should be seen as a decision-support tool, not a replacement for human expertise. Analysts must always have the final say, and the AI’s output should be transparent and explainable, so that the human operator can make informed judgments.
One of the key provisions Martino highlights is Article 14 of the AI Act, which states that “High-risk AI systems shall be designed and developed in such a way, (…), that they can be effectively overseen by natural persons during the period in which they are in use.” This ensures that AI remains an aid rather than an independent decision-maker in critical situations like forensic investigations.
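Article 14 does not mandate any specific mechanism, and the following is only a minimal sketch under assumed names and a hypothetical review workflow, but it shows what oversight by natural persons can look like in software: the AI output is logged purely as a suggestion, and the analyst’s decision, including any override, is what enters the report.

```python
from dataclasses import dataclass

@dataclass
class AiFinding:
    label: str         # e.g. "manipulated" or "authentic"
    confidence: float  # model score in [0, 1]
    rationale: str     # explanation surfaced to the analyst

def record_conclusion(finding: AiFinding, analyst_decision: str, analyst_id: str) -> dict:
    """Log the AI output as a suggestion only; the analyst's decision,
    including any override, is what enters the report. Illustrative only."""
    return {
        "ai_suggestion": finding.label,
        "ai_confidence": finding.confidence,
        "ai_rationale": finding.rationale,
        "final_decision": analyst_decision,            # the human has the final say
        "decided_by": analyst_id,
        "overridden": analyst_decision != finding.label,
    }

finding = AiFinding("manipulated", 0.91, "inconsistent JPEG compression traces")
report = record_conclusion(finding, analyst_decision="inconclusive", analyst_id="analyst-042")
print(report["overridden"])  # True: the analyst overrode the AI suggestion
```

Keeping the AI rationale alongside the human decision is what makes the override auditable, which is the practical point of the transparency and explainability requirements Martino describes.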
As Martino discusses in his article, the impact of the AI Act will likely extend beyond Europe, just as the GDPR reshaped global privacy standards. He expects the Act to influence AI regulation around the world, inspiring similar laws in other regions as AI becomes more prevalent in law enforcement and judicial systems, and he urges forensic professionals across the globe to pay attention to it.
Martino’s article offers valuable insights for forensic practitioners, legal professionals, and technology vendors alike. He concludes that while the AI Act imposes significant responsibilities, it is a necessary evolution for maintaining public trust in AI systems used in sensitive fields like forensics.
Read the full article now on the Amped Software blog.