Deepfakes and computer-generated images have been around for a few years now, and they have become quite popular. Amped Software recently authored an article in Evidence Technology Magazine about the challenging task of dealing with deepfakes. Much as with cryptography versus cryptanalysis, a battle is underway between those developing ever more advanced neural networks, capable of creating increasingly realistic fakes, and those working to detect them. This battle is mostly driven by researchers, but video forensic analysts and everyone involved in forensic image analysis are necessarily affected, since they may soon have to face deepfakes in their investigations!
A few months ago, Amped Software had the chance to interview Dr. Cecilia Pasquini. Cecilia is an Assistant Professor at the University of Trento (Italy), which hosts one of the world's leading research groups in multimedia forensics. Cecilia and her colleagues ran an experiment in which they showed computer-generated faces to a large number of people and asked them to distinguish real pictures from synthetic ones. The results are striking: images created with an "outdated" neural network (from 2017) were still detected fairly well, but when a more recent network (from 2019) was used, people got it wrong most of the time, and even placed more trust in synthetic pictures than in real ones! This stunning finding inspired the title of their paper: More Real than Real: A Study on Human Visual Perception of Synthetic Faces.
Amped thus contacted Cecilia and asked her to explain more about what deepfakes are, what their impact on society is, and what researchers are currently doing to fight this phenomenon. Watch the full interview here:
Contact Amped Software for more information.