AI In CSAM Investigations And The Role Of Digital Evidence In Criminal Cases

Si: Good morning, ladies and gentlemen, boys and girls, friends and enemies. This is the Forensic Focus Podcast. Desi and I are here at opposite ends of the time zones. It’s not really early morning for me, it’s 8:30, and it’s not really late night for him because it’s…

Desi: It’s only 6:40.

Si: Oh my God, 6:40. Yeah, that’s reasonable. But he’s been skiving off all day, so it doesn’t really matter! So yeah, as is the way of these things, we started off off air (although I’m starting to think we should just hit record when we join and then, like, cut it later and leave these things in). We were discussing some stuff that we’ve come across on the back of last week. And Desi’s put two papers forward, both from DFRWS…the US one? Yes, DFRWS USA.

Desi: Yeah.

Si: ’22 and 2019, the CSAM one as you were saying. Both from DFRWS, to do with the use of, or the relevance of, machine learning and artificial intelligence (I hate that term so much!) in forensics and various things. And obviously we’ll link to those in the show notes. But we’ll start with the older paper.



And we were just discussing…so the paper is about a “practitioner survey exploring the value of forensic tools, AI filtering and safer presentation for investigating child sexual abuse material (CSAM)” stuff. And it’s a paper (we’ll put links in) by Laura Sanchez, Cynthia (I’m so sorry people, I’m gonna get this totally wrong I can do Cory Hall, that one’s easy) Ibrahim Baggili and Cynthia Grajeda. (There’s probably a better pronunciation of that J in whatever the original language that it originates from is, but I do apologize.)

So, we were just flicking through it now. I think Desi’s read a wee bit more of it than I have. But we were…the question that came up from it is: is AI just an automation tool? And I think you’ve actually hit the nail on the head there. It has to be at this point in time, otherwise we’re allowing for a machine to make intelligent decisions. Now, AI is such a false term in my opinion.

Desi: Yeah.

Si: You know, we are talking about statistical analysis here. That’s all it is, is statistical analysis. Machine learning is statistical analysis. Either in a constructive way, if you’re talking about something like a deep fake, or in a reductive way, if you’re talking about looking at something and saying, “is this child pornography or is it not?”

Sorry, again, I’ll…let me rephrase that. I got into the industry too long ago, and the phrase “child pornography” is a term that was used back in the day. It’s not. Pornography is something that consenting adults can agree to. This is child sexual abuse material. And hence the fact that we call it CSAM and it’s not…it shouldn’t be related to pornography, which is perfectly legal and is entitled for people to create and distribute and enjoy as they wish.

So, CSAM, and we’ll keep referring to that. In the UK it’s IIOC (indecent images of children); in the rest of the world, CSAM. So, we can pick an acronym and we’ll stick with it. We’ll call it CSAM. But at the end of the day, you’re going to identify a piece of CSAM and present…it has to be presented to an examiner for confirmation. You can’t have a machine deciding that this image is CSAM and attempting to guess the age, and attempting to classify it in terms of its severity. Okay, so there are different levels of severity. If you want to know about them, look them up. I’m not gonna go into that here. It’s unpleasant enough as it is.

So, AI can assist, that machine learning can assist, but I wonder actually if perhaps there’s not a risk, certainly in terms of categorization and age estimation, if there’s the kind of self-fulfilling prophecy, if you put a number and the category in front of an examiner and they’re borderline on it, they will err towards going with what the software tells them it is. Now that could be a good thing.

It could be that the software is reasonably accurate. It could be that it’s downgrading it, or it could be that it’s upgrading it, all of which are, you know…either an upgrade or a downgrade is a bad thing. Either from the perspective of, well, both from the perspective of justice being done, but in favor of either the defense or the prosecution. And so in that regard, it has to be an automation tool. It can’t be allowed to make decisions.

And actually it has to be relatively restricted in what it can be allowed to say. Perhaps, you know, things like age categorization and severity, both of which are hard jobs. And you know, I do CSAM cases, (I have…in fact, I’ve got one this week I need to go through.)

But I…at the beginning of my report, it says, “I am not an expert in classification. I will not take a stab at figuring out how old somebody is, and I will not figure out what the category is.” I mean, the categorization is relatively easy to a certain extent. But, you know, that’s not my place to do it. I’m a digital expert. I know what a computer is, I know what a file is and I know where it’s been and what…where it’s come from when it was created.

But I couldn’t tell you that, you know, somebody in it is 15 or 25. I mean, I might be able to manage 15 or 75, but that’s a pretty broad guess. And you know, there are specialists and there are medical professionals who have a far better understanding of child development and, you know, the way that people grow and change over time, that, you know, are way beyond my knowledge and therefore it’s not something that I would want. The idea that the machine can do that…

I mean, logically yes, if you can distill a hell of a lot of knowledge into a machine learning environment by showing it images and, saying, you know, this…and machine learning’s been very interesting. Actually, there was an article yesterday and I’ll link to it. And in fact I’ll bring it up and I’ll see it, I’ll find it first, then I’ll bring it up on the screen. But it was about machine learning being used in drug creation … They were using it for developing antibiotics. Hang on, let’s have a look and see if I can remember what the article was…

Desi: But I think that’s just it. Like, you started talking about AI and then you mentioned, like, you can get a lot of benefit out of automating with machine learning, and that’s just it. Like, that was the article, and that’s what sparked my question: when I read the artificial intelligence section in the CSAM article, it talks about machine learning to start with, then talks about training a network, and then just using it as an automation tool. So it seems like such an abused term within the research community, to say something’s AI when it’s not.

Like, it is just machine learning and tool automation and humans are just training computers to “think” (in quotes) on how to do things. And then I guess to your point about like training it to do the age estimations, I do know…and I don’t know whether it’s gotten better, but I remember reading a paper where it was good for…well now there’s two of you, Si.

Si: I know it’s amazing, isn’t it? Sorry, let me mute myself.

Desi: …like a Caucasian data set, but if it was any other ethnicity it was really poor at telling kind of like traits and age and even gender sometimes, it was really bad at distinguishing between it because I guess like it wasn’t trained on that data set.

And that could potentially be an issue, particularly in the CSAM space, like if you’ve got ethnically diverse children who are being abused, is potentially that set missing something? Because like, I don’t know how LEA uses it, like whether every time they’re manually going in and checking, or there’s some cases where maybe data sets are just being used because they’re trying to protect people from, I guess, the psychological impact that that has on the analyst.

Si: Yeah. I mean, I think because we…it is difficult, it’s difficult. I mean, and yes, we want to prevent psychological damage on the analyst, but at the end of the day, we are professionals. We don’t roll out (well, I think we probably would if we could), but we don’t roll out, you know, automated doctors because we’re worried about the impacts that, you know, seeing some blood is having on a doctor.

You know, A&E medics are, you know, immensely stressed, they’re immensely under the cosh, and arguably there’s a lot more pressure. I mean, we’re…there’s a huge backlog of cases in the UK in terms of forensics. But, you know, ultimately we could do with far more doctors than we could with forensic analysts! They’re kind of more important, I would argue, and I say that as a forensic analyst.

Are we trying to absolve ourselves of responsibility in the interests of our better health? I don’t know. We take on these roles because we think it’s important to do and because it’s important that it’s done right. And although there are things that, you know, can make this life easier. One of the best things is large hash data sets of CSAM that exist so that we can do a match against known CSAM that’s out there, and that already…

I mean, that’s fantastic. And to be honest, in certain cases that will be enough to get a conviction, and that’s fine. You know, nobody needs to look at it. You’ve got an image which we know the origin of by now. We know, actually, that that child has been secured and protected, and is hopefully getting on and having a new life. And it’s properly categorized, because we know the age at that point in time, we know the offense, and therefore that’s enough to secure a conviction.
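The known-hash matching workflow described here can be sketched in a few lines. Everything below is illustrative only: real known-CSAM hash sets (such as Project VIC or, in the UK, CAID) are large curated databases with their own formats and metadata, and the “files” here are just in-memory byte strings:

```python
import hashlib

# Illustrative set of known hashes. The single entry below is simply
# sha256(b"test"); real databases hold millions of vetted entries.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_file_bytes(data: bytes) -> str:
    """Return the SHA-256 hex digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

def match_known(files: dict) -> list:
    """Return the names of files whose digest appears in the known set."""
    return [name for name, data in files.items()
            if sha256_file_bytes(data) in KNOWN_HASHES]

# Usage: only the first "file" matches the known set.
files = {"a.bin": b"test", "b.bin": b"something new"}
print(match_known(files))  # → ['a.bin']
```

Matches against a vetted set can be reported without anyone viewing the material; only the unmatched remainder needs human review.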

The fact is that anything that is new needs to be looked at by a human at some point because we need to start to do the child protection aspects of it. So sooner or later somebody is going to have to deal with it. And that should be, you know, it’d be in the social services sphere and like that. We can do our best. I think perhaps the…you know, looking at the article, what it perhaps is suggesting that it might be able to do is, to a certain extent, extract things for some parties that are less damaging in the sense that, you know, you could just do face extraction, for example.

Desi: Yeah, it’s true. Yeah.

Si: And then you could be using that in child protection, okay, you know, identifying people. And that has a value. Especially whereby you are not necessarily presenting it directly as “evidence” evidence. You know, evidential value has to be high. You know, this has to be the truth. You know, you can’t take an interpreted and upscaled image and then pass it off as being entirely accurate.

Whereas if you take an interpreted and upscaled image in order to identify someone, you can then crosscheck that by going and talking to them and saying, “is this you?” Kind of thing. So you, you know, there are methodologies and sorry…hang on: coffee! (Thank you very much! Ah, bliss.) So yeah, so in that regard, I think that there is definitely some value. But for evidential stuff and to look at it.

And again, the trouble is that if you say…you kind of have to watch the whole movie…if it’s a video file, you have to watch the whole thing for its content. Because at the beginning it may well be one thing and at the end it may well be another.

And, you know, how does one interpret if a machine learning…and this is a problem with statistics, actually (not the problem necessarily, it’s just nature of statistics), is that if you have, let’s say, a video that is the first 20 minutes of it are perfectly fine and the last 1 minute isn’t, statistically that may well not flag as CSAM, even though at the end of it, it is. You know, I don’t know any manufacturers who are doing this.
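That statistical point is easy to demonstrate. A minimal sketch, using made-up per-minute classifier scores for the hypothetical 20-minutes-fine, 1-minute-not video, shows why averaging over a whole file dilutes a short offending segment while a windowed maximum does not:

```python
# Hypothetical per-minute classifier scores for a 21-minute video:
# 20 minutes scoring low (0.05) followed by 1 minute scoring high (0.95).
scores = [0.05] * 20 + [0.95]

# A whole-file average dilutes the single bad minute...
mean_score = sum(scores) / len(scores)

# ...whereas taking the maximum over short windows catches it.
def max_window(scores, size=1):
    """Highest average score over any contiguous window of `size` minutes."""
    return max(sum(scores[i:i + size]) / size
               for i in range(len(scores) - size + 1))

print(round(mean_score, 3))  # → 0.093, below a typical flag threshold
print(max_window(scores))    # → 0.95, well above it
```

The design point: any flagging scheme that scores whole files rather than segments will systematically miss exactly this case.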

I think the…where I first sort of came across it was the idea that a simple statistical analysis to identify images that had a large content of skin tone, for example: that’s generally indicative of either photographs of a beach, or something that you potentially want to flag for further examination.
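A crude version of that skin-tone heuristic might look like the following sketch. The RGB thresholds are purely illustrative (real filters use trained models or other colour spaces such as YCbCr), and the “images” are toy pixel lists:

```python
def skin_fraction(pixels):
    """Fraction of (r, g, b) pixels falling in a crude skin-tone range.

    The cut-offs below are illustrative only, loosely in the style of
    classic rule-based RGB skin detectors; they are not a real filter.
    """
    def is_skin(r, g, b):
        return (r > 95 and g > 40 and b > 20
                and r > g and r > b
                and r - min(g, b) > 15)
    return sum(1 for p in pixels if is_skin(*p)) / len(pixels)

# Usage on two toy "images": mostly skin-like tones vs. mostly blue.
beachy = [(220, 170, 140)] * 9 + [(30, 90, 200)]
ocean  = [(30, 90, 200)] * 10
print(skin_fraction(beachy))  # → 0.9, worth flagging for human review
print(skin_fraction(ocean))   # → 0.0
```

Exactly as described, this flags beach photographs as readily as anything else; it only narrows what a human must look at.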

Desi: Yeah, true. So maybe in, like, if you had a full set of photographs and you identified manually one that was CSAM already, then you could parse that full step through to get a broader understanding of how many of those photos were potentially CSAM from that set.

Si: Yeah. I mean, I think coming back to your original question, which was: is it an automation tool? I think the answer is: yes, with caveats. In the same way that a triage tool is: yes, it’s a tool with caveats, which is, it does…a triage by definition doesn’t look at everything.

And I think in a similar way, artificial intelligence (or machine learning) is a tool, but it doesn’t look at everything. And whether it actually looks at every bit of data or not is, you know, that’s open to debate. But what it doesn’t do is it doesn’t look at it with an interpretive eye. It looks at it with a statistical eye.

Desi: Yeah. There’s no, like, objective thought behind the process itself. Like, it’s not making decisions. It’s not making…like, as an investigator, you may look at a file in a folder and go, “oh, I’ve seen this before,” or “I know the TTPs of this attacker, so I’m just gonna jump over here and look…”

Like, to me, like, those investigative leaps where you are associating something that’s not directly involved is a hallmark of, I guess, investigation intelligence, right? And I haven’t seen a tool that does that yet. Like it…if you tell it to go look at these places, it will, but if there was something new that was not really associated, like it’s not gonna make that jump and go and find that.

Si: I’m trying to remember who it was that was telling me this, but somebody was talking about…

Desi: So, I was just thinking about it because I heard…it was on Triple J, I think, which is our, like, national broadcast…one of our national radio stations. And they have a segment I think every week with Dr. Carl who was like a really prominent scientist. But they had a guest on it and I’ll have to figure out where it was and link it.

But they were talking about…it was pretty much like an entire episode on ChatGPT and they were talking about it and, like, they’d want to see where the ChatGPT will eventually make investigative leaps. Like if you hold it to go figure something out, it would make a jump between like a pivot that wouldn’t necessarily be straightforward, but a human would do, is kind of where they wanna see if they can get ChatGPT to go next.

Which I thought was interesting. Which would be a good step, right? Like it’d be like the start to AI, I guess. Because that’s what sparked me reading this paper and looking at it was, I don’t think AI’s…like, the way it’s defined, I don’t think it’s real when everyone uses it as AI. Like, it’s just machine learning and automation.

Si: It’s interesting because I mean, I, you know (confession time), I actually read artificial intelligence at university a long time ago. I went to Edinburgh and I studied under several people. One of them was a guy called Chris Mellish, who co-wrote the standard textbook on Prolog as a programming language for AI stuff. He was my tutor. And AI is the field of trying to make computers do this. And as of yet, we haven’t succeeded.

So, to label anything as AI is a complete lie. There is no AI. There is machine learning. There is the applied statistics, which it is, which, you know, let’s say…let’s be honest, you know, things that we’ve shown last week (the week before last) are absolutely impressive. There’s no two ways about it. You know, the fact that you can create an image that is nigh on photorealistic and you can have…

I mean, I’ve got a ChatGPT open in the window here. I’ve had it writing haikus, and it writes haikus way better than I do because it can remember what the structure’s supposed to be for a start. And, you know, it’s incredible, it’s amazing, but it’s not intelligent. And the amount of people that are using it as a sales pitch and misusing it as a label is astronomical. And it’s not right, in my opinion.

But with that comes this massive risk that people will believe that it is capable of something amazing. And it isn’t. It is…to a certain extent, it’s kind of like the early conversations about the cloud. It’s like, “the cloud will be the solution to all of our IT problems.” No!

The cloud moves your IT problems from one place to another, potentially, but it’s not this panacea of amazing…thing. And AI is that as well: it’s not a panacea, it’s a tool at the moment. And I think we should actually, you know, without being too Skynet-y about it, and paranoid and dystopian, we should actually be slightly worried if we start to create computers that can make decisions and can think for themselves.

You know, Asimov’s Three Laws of Robotics are very important, and, you know, I wonder how the…my experience of the general literacy of the population is sadly disappointing. But, you know, we need people to be reading things (if they’re not banned in the US) like 1984, and, you know, Asimov for the Three Laws of Robotics, and Brave New World.

And if nothing else, The Hitchhiker’s Guide to the Galaxy, to see what happens to Marvin the Paranoid Android when he attains consciousness and then realizes that everything is hopeless and just doesn’t wanna do anything, you know! We should be very careful about selling this as a panacea without, you know, a true appreciation of the potential…well, we’re concerned about atomic weapons now; you wait until we create a machine that can think for itself. I mean, you know, that’s gonna be scary.

But yeah, it does to me seem to be very disingenuous that people are saying that their product has this instead of saying that “our product does some very clever statistical analysis”, which, you know, certainly in our field is definitely understood and definitely appreciated.

I’d much rather somebody said, “look, you know, we do a really detailed statistical analysis.” And I’m gonna call Amped out on this because I’ve found their products to be very good. And then explain what it is that that statistical analysis has done. And then when they finish doing it, give you the results in the mathematical way that shows how they got to the answer. If you do that, it’s science, not snake oil. And that’s cool.

Desi: I was just gonna say that. Like, I’m immediately skeptical whenever, like, someone reaches out on LinkedIn or, like, you go to a conference and you see all the vendors there and as part of their advertising, it’s like, “we use AI to model things.” And then it’s when you talk to the vendor or, like, reply to whoever back, you’re like, “so what is the AI doing?” And usually, like, they don’t know, like, they’re just salespeople (or pre-sales).

They’re generally not the engineers behind it. And marketing’s probably put it together and they’re like, “oh, AI is cool, like, let’s just say that it’s AI”. But immediately….in my mind (and I’m sure it happens to a lot of people that are in the industry), it immediately discredits that vendor or that person that’s reaching out, or that brand. Because saying AI and then not being able to ex…like, fair enough, like, things are secret and you don’t want to tell people how you do things and because that’s how you get a market edge.

That’s fine. But you should be able to explain in some way on how AI is helping, or, like you said, like, Amped provides the kind of mathematical ‘how we got here’, because you probably have to use that, right?

Si: Well this is, it is…that certainly in our field…I get that there needs to be some commercial sensitivity, but the bottom line is that your product does something, somebody needs to stand up in court and say what the hell has happened and how it got to these conclusions.

Either it’s gonna be: I have to put in a call to your guys to come to court to explain, which may be your business model and you want to claim costs for that (although I suspect it’s not gonna scale well), or I have to understand how it does it so that I can explain. Something which is a proprietary secret that is kept back and not disclosed is difficult.

And, you know, I’ll sort of allude to something similar in the UK: there’s been a very large set of cases around something called EncroPhones and EncroChat, which was an encrypted telephone network, you’ve probably heard of it (and we’ve talked about it before, I think). But the bottom line is that from a defense perspective, we can simply say we cannot confirm or refute the accuracy of this information because you haven’t told us how you’ve acquired it.

You’ve given us this high-level, wishy-washy explanation of what’s happened, but we have no raw data to examine; the methodology with which it’s been achieved is opaque; there’s no code to examine; we can’t, you know, test the tools to verify that they do it accurately. So all of the steps that we would normally be able to carry out are hidden behind official secrecy provisions for various countries. And that doesn’t really make for a fair trial.

Now, I think in a lot of cases that I’ve seen that have gone through with the EncroPhone, it really doesn’t help that people have taken photos of large wads of cash in the middle of their living room along with a pile of drugs, and then taken a few photos of their car, sent them to their mates, and then talked about their kid getting sick at school on the day they went to pick them up.

You can see, you know, the police have been able to extract the intelligence from the content even once the tool has been discredited. So, they’re safe on that ground. And ultimately, I guess that’s what we would have to say about the AIs: if you can then extract the evidence from it by, you know, passing it through a human, who can then confirm or refute it, perhaps it has a value. But, you know, it’s a slippery slope, isn’t it, to have a black box that spits out answers? And you don’t know whether it’s spitting out the right ones, the wrong ones, half of them…what the actual result is.

Desi: I have a story about that before we jump across to your topic on the article that you sent me, because I would love to talk about that on tonight’s show. Back in 2017, where I was working, we’d been approached by a vendor, very well known, like, it’s used in a lot of court cases. And one of their solutions engineers had come out to try and get us to purchase it, to use it in our job, because we did do data collection and a little bit of forensics.

And I was having a chat to the solutions engineer about how the program collects…or how their devices collected data and then processed it. Because, like, I’d used their stuff before, but I was like, “okay, so it collects the data. It should be in a raw format at least, so it hashes it first before ingesting it into the tool.” Because it had its own proprietary, like, format system, like a lot of the big ones do.

And he was like, “no, like, we just put it in our format and then we hash it.” And I was just like, “okay, well cool, you’ve got your format, that’s fine. Like, there are probably open source tools that parse it, and there are. But how do you hash it? Like, once it’s in your program?” And they had a proprietary hash and I was like, “well how do you confirm that?

Like sure, if you’ve got a proprietary, why don’t you also do a SHA-256 so that if I want to do my own forensics with other tools, I can compare it to the output that the tool is giving me versus, like, how do I know your tool’s not doing anything with that data?” And they’re like, “no, no, it’s fine. Like, we’re approved by courts and, like, a lot of the people use this.”

And I’m like, “cool, but I don’t trust you. Like, how do I know you don’t have a rogue engineer at one point? Or how do I know your tool doesn’t break and does something weird with the data and it’s not presenting it correctly and I need to verify it?”

And it just, like, I could see the thought going in, but I don’t think he’d ever worked in a forensic…like, where forensics was…he needed to follow a chain of custody and verify data. Like they’d just been building tools this whole time and they’re like, “no, we’ve got this.” And I’m like, “it’s trust but verify all the time, like…”
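Desi’s “trust but verify” point is straightforward to put into practice with standard hashes. A minimal sketch, using Python’s hashlib on hypothetical in-memory evidence, records open, well-known digests at acquisition time so that any other tool can independently re-verify the data later, regardless of what a vendor’s proprietary format does:

```python
import hashlib

def acquire_and_hash(data: bytes) -> dict:
    """Hash evidence at acquisition time, before any tool ingests it.

    Recording standard digests alongside whatever a vendor tool produces
    means any independent tool can re-verify the data later.
    """
    return {
        "sha256": hashlib.sha256(data).hexdigest(),
        "md5": hashlib.md5(data).hexdigest(),  # legacy cross-check only
    }

def verify(data: bytes, recorded: dict) -> bool:
    """Re-hash the data and compare against the acquisition record."""
    return acquire_and_hash(data) == recorded

# Usage: the record made at collection time still matches later...
evidence = b"disk image bytes"   # stand-in for acquired raw data
record = acquire_and_hash(evidence)
print(verify(evidence, record))         # → True
# ...but any alteration, however small, is caught.
print(verify(evidence + b"!", record))  # → False
```

The key design choice is hashing the raw data before ingestion: a digest computed only after conversion into a proprietary format proves nothing about what the conversion did.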

Si: Yeah. I remember…I…so, you know, I work in security and we looked at, sort of, cryptography as a big aspect of security work, and you know, you bounce around in it long enough and it’s like, you know, “what’s the best advice for writing your own cryptographic libraries, your own cryptographic protocols?” And the best advice is: don’t.

Fundamentally, you know, there are known good, solid tried and tested cryptographic protocols. And the same for hashes. If somebody said that they had created their own proprietary hash, my first reaction would be, “no, just no, absolutely not”. Because unless it’s been tried and tested, how do I know about collisions? How do I know about that space that it’s in?

Desi: How do you know about vulnerabilities? Like, depending on what you’re using your cryptograph for, like, if you’re using it to secure something, like, there’s probably someone smarter than you that can find a vulnerability in that.

Si: Yeah. And that’s why the tried and tested ones are so good, is because a whole bunch of people smarter than me have been involved in doing it! And I’ve got a lot of faith. And even then, you know, you see, you know, MD5 was kicking around for years and years. And, you know, to be fair, MD5, it’s still not a bad hash as it goes, you know, in terms of realistic real world attack.

It’s incredibly challenging. It’s not impossible, and that’s why we’ve moved on to SHA. It’s not impossible, it’s doable, it’s logically and feasibly achievable, but, you know, could the man on the street do it? No. And therefore, you know, if the hash that I’ve collected today on MD5 and the hash that I collect tomorrow on MD5 from the same disc match, am I worried about that disc having changed?

No I’m not. And it’s quicker than SHA-256! But, you know, realistically, for purposes outside of that…for things on the web, if I’m downloading and verifying a package, no, I’m gonna be using a cryptographic protocol that I’ve got more faith in, that has higher security. So, yeah. No…I…that’s bad.

The story I was hearing the other day was…I think I know who it was, and we’re gonna talk to them later and I’ll try and get them to reiterate it. They probably won’t, because it may be confidential (he says, thinking very carefully about what he can say)! But no, somebody came to them to ask them to test a tool for validation purposes. It was a forensic tool.

And then they were talking about where…so, it’s looking for certain evidence, this, whatever this forensic tool is. And it has certain expected places and it has other places in the file system where it wasn’t expecting to find evidence. And they were like, “right, so you’re gonna put the evidence into places we expect to find it?” You’re like, “no! Sorry, you want me to set up the test so that your tool passes it? And then that’s gonna get you your validation. No, I don’t think so.” So I dunno what the outcome of that was. But that sort of similar vein.

Desi: Probably them finding someone else to test and validate it, who would put the evidence in the right spots. So, I guess this is, again, why you should always question the tools that you are using. Even in the example that I gave (and I’m not going to name and shame, because they may have changed since 2017 when I was looking at them), even with well-known tools, it’s still worth questioning how they function, what you’re getting out of it, and always test and verify with at least one other tool.

And every forensics course I’ve ever done always says that: like, don’t trust just one tool, because it may be interpreting things differently or it may be incorrect. And you see that with vendors that do come out and publish changes to their tools that say, “hey, between these dates, with this patch, this information was wrong, so if you had cases then (and if it was critical), go back and have a look at your dataset.”

Si: Yeah. There have been several fairly high profile things of that nature. Yeah, absolutely.

Desi: Alright. Good chat about AI. But you sent me a news article which was pretty interesting. I had a read this afternoon, where a 32 year old man has appeared in court charged with murder of…was it his partner, or just someone he murdered…?

Si: You know, I was reading the article as well and I couldn’t see it. Now the name is different. I don’t know if they…what the state of their relationship was, and I haven’t read enough around it to be able to comment sensibly. But essentially the guy had provided an alibi to the police that he was online at the time live streaming a play through of the game.

And it turns out that said livestream was pre-recorded. He’d said to the people in the chat of the livestream that he was having some technical difficulties and therefore he couldn’t respond to chats in real time, and thus got around the little niggle of having to respond to people talking to him in real time. He said that at the beginning of the livestream, then played his pre-recorded stream back at the time this murder was committed, giving himself what appeared to be quite a good alibi.

Turns out that not only has he now subsequently admitted that, you know, it was a pre-recorded live stream, but that somebody, somehow figured that out. Whether it was a forensics examination of his device or whether it was a forensic examination of the video, I have yet to find out. But it’s a really exciting concept.

And it’s quite funny because I saw this a few days ago and it passed my mind and I thought, “oh, that’s interesting,” and then it slipped my mind. But before I sent it to you there’s a program here on the BBC called Death in Paradise, okay.

It’s a detective series essentially predicated on the concept of what’s called a ‘locked room mystery’, whereby basically there are a number of suspects, but none of them technically could have committed the crime because they were all busy being somewhere else, or the room was locked and there was no way they could have gotten into it and all the windows were shut.

And that kind of, you know, ‘how did they do it?’ kind of thing. And last night’s episode (without wanting to spoil it for anybody who hasn’t seen it yet and might wanna watch), part of this guy’s alibi is that he is on a livestream Q&A about his work, his book (he’s a criminologist), at the time that the murder would’ve had to be committed. And it sprang to mind. And I was saying to the family who were watching with me, I was like, “oh, that’s really interesting, because there’s a case that’s just been done on that.”

Now it turns out that actually he was live streaming it, he was just live streaming it from another location at the time that had been made to look like his original location where he claimed to have been live streaming it from. So it was a bit of subterfuge in that regard rather than the prerecorded live stream. But I did…it jogged it in my memory and that’s why I sort of sent it over to you as an interesting concept.

But it’s…the amount of times actually now that…I mean, I’ve done cases where people have tried to use digital evidence as an alibi. I can remember it happens often with phones. People are like, you know, “I was somewhere else, here’s my GPS data, you know, you can see that; you know, please pull the cell site data. I was 15 miles away at the time taking a phone call from my granny. You know, can you cross-reference that and show that I was somewhere else?”

Which is…“the device was 15 miles away at that point in time, I have no idea where you were.” Although somebody unlocked it and, you know, that kind of thing. I’ve had someone come to me, although I didn’t do the case, but somebody came and said, “look, I was busy playing FIFA on my PlayStation at the time this happened. Could you please show that I was logged in?” And again, it was like, “well, yeah, we can show that somebody perhaps was playing FIFA using your account, but I can’t prove it was you.”

Although it’s an interesting question that, you know, one must have a certain playing style that is detectable through a controller if you were to do enough analysis of it that it would be able to tell…you can, I know you can do it with typing. Back in the day, I remember some programs that could identify who was sat at the keyboard on the basis of the typing cadence, because everybody types differently: in the same way as we have different handwriting, we have different typing styles, different hands on keyboards.

(Although I change keyboards so often, I suspect that it’s hugely dependent upon device being used, and thus it confuses things. Back in the day of corporate environments where everybody had exactly the same keyboard, maybe it was a more effective thing.) So I assume you could do similar things with a PlayStation and the controller and pick up a sort of playing style. (Mine is just terrible.)

So it’s not unusual to come across the idea of alibi verification. One case I did do (which, strangely, I thoroughly enjoyed) was a guy who had been accused of domestic violence, domestic assault. And he had provided footage from an internal CCTV camera in his lounge of him at home for the whole evening. Okay. And it had been uploaded to the cloud, so we knew that the timestamps were pretty accurate, in so much as they hadn’t been tampered with.

So, we knew that the time was okay. I watched this…so I got something like 10 hours of footage of this from whatever time at night, but he slept with the television on, and you could see the TV screen in the CCTV capture. And I could track through all of the programs that he was watching (or that were coming onto the television) at the various points in time during the night. I could cross reference them with the Radio Times listings for the day. (So the Radio Times is a magazine in the UK which says what programs show when, and there are archived copies, so I could cross reference all of the television programs going on.)

I could see that he was there the whole time. He got up and went to the loo in the middle of the night and then went back to bed, and you could see…and unfortunately hear that process going on through the CCTV audio. So, you know, this kind of alibi building is a strong thing. And I’m just intrigued, like you, you know, how they’ve managed to pull this one out and identify it. It’s fascinating.

Desi: Yeah. So it’s interesting. There doesn’t seem to be too much information. And while you were telling that story, I read a few more articles and they all say the same thing. So, all it says is that police determined that it was fake, but then halfway down each one it says that he then admitted to faking that stream.

So, it’s kind of the question of was it just a hunch and they provided enough pressure to…with other evidence for him to admit it? Or did someone actually do forensics on it and is like…it’d be interesting to know what the forensics was to determine that it was fake. Or how he even was planning on doing it.

Like, I imagine if it was pre-recorded and uploaded, there would be evidence on the stream…because it was on YouTube, there would be evidence on YouTube servers to say, well, potentially, I don’t know. Like, it could be that there’s nothing there.

Si: Yeah. So the article I have says “DCI Neil McGinnis said technical examination by cyber experts indicated the footage was pre-recorded.” So, that does kind of imply that it was an examination of some form that did it. Whether it was…but still, I don’t know whether that examination was of, you know, if they pulled his laptop and examined it and found the pre-recorded stream, great. I mean, that’s really easy. The question is, you know, if they pulled it from YouTube then what was the giveaway? What was the thing that showed it to be false?

Desi: Right. Because if they found that the created timestamp was before when the live stream was, then that makes sense. The timestamp would’ve been like these sessions that we do: the full recording will be timestamped after we finish this session, because it uploads everything, and then you can download it and the timestamp would be different on our computers.

But yeah, I just have a feeling, reading the article, it seemed really cool the way they wrote it, but then I’m just, like, they probably just did get his laptop and look at a timestamp and they’re just like, “our cyber experts figured this out”, and like, good on them. Like, they probably had a hunch and went and got a warrant and they’re like, “we don’t reckon this is…”

Si: Oh yeah. I mean whatever the net result is, I mean the thing is, it’s like another quote from the prosecution here is that the suspect “had devised a sophisticated, calculated and cool-headed plot and was capable of deception beyond the imagination.” Quite a…

Desi: Well, like, I think that’s actually true. Like, it’s a pretty interesting way to provide an alibi because it seemed like it was a pretty popular YouTuber. So, it seemed like a pretty interesting way to create an alibi for yourself in a non-traditional sense.

Si: Yeah. And, you know, it’s quite…it’s something that you tend to think of as requiring your presence and you have a lot of people being a witness. I mean, it is almost television plot worthy. You see, you know, literally just described the plot! So yeah. It’s definitely…I mean, you’re not gonna argue that this was a crime of passion and not premeditated, are you? You know, it’s gonna be a difficult one.

Desi: I mean, is this…although I will say, so he had faked a Grand Theft Auto live stream. And if this was in America (which it wasn’t, it was the UK, I think?)

Si: It was Ireland, Northern Ireland. So yes, UK.

Desi: If this was in America, Congress would be talking about how violent video games are causing people to premeditate YouTube streams and kill people. And they’d be trying to ban a whole bunch of them at the moment.

Si: Total digression. Do you think that there is any link between violent video games and…?

Desi: No. I think people probably have a predisposition to violence and if they’re presented with violent material (not necessarily video games), then they probably will have a stronger emotional reaction to that. And they may go out and enact that or premeditate something. But I think it’s a…I don’t think the way they sell it as ‘violent video games causes the general populace to have an increase in violent tendencies’, I don’t think that’s accurate at all. I think people are just predisposed…the same as people are, like, some people are predisposed to like addictive behavior for gambling or drugs or alcohol or anything like that.

Like, if they’re exposed to that, the predisposition may cause an issue for them. But in general, like, millions of people drink alcohol and don’t have an issue with it. But there will be some that get addicted.

Si: I’ll tell you, you know, even my alcohol consumption levels are probably too…or have been too high. I’m not gonna…but it’s not quite a dependency. But I stopped drinking in January. I’m still not drinking. I haven’t had a drink since New Year’s Eve.

But actually it’s not until you stop and decide that you are not going to, that you realize how bombarded you actually are with the concept of drinking. It’s quite weird to step back and sort of go, “actually, you know what, no, I don’t need this bottle of wine with my meal deal today. I don’t need, you know…” And just on television, everybody is either having a beer or chucking back a whiskey or, you know, whatever it is. And it’s like you’re sitting there and you think, “oh, that’d be nice, but I’m not drinking.” But everywhere! Everywhere!

Desi: When I was 19, I went backpacking over to…I went to, like, South Africa and Egypt and Jordan. And it was awesome. And when I left I decided that I was gonna go vegetarian because, one, through some of those regions, like, the food quality (especially Egypt) dropped quite a bit. And food poisoning is quite common amongst international travelers.

And I also decided to stop drinking because I was a student on a very tight budget and alcohol was just a cost that I didn’t wanna wear. But it was so funny when I caught up with people and they’re like, “oh, let’s all have food”. And I’m like, “yeah, cool. Like, I’m vegetarian,” and they’re like, “yeah, no worries, like, that’s cool, we’ll just order a vegetarian dish and like share it and all that”. But then they’re like, “oh, what do you want to drink?”

And I was like, “oh, I’m not drinking”. They’re like, “what’s wrong with you?” Like, it’s so pervasive in most cultures. Like, not Middle East cultures because generally they don’t…well, they’re dry countries in general and most people don’t drink. But the people I traveled with, every single night, were like, “what’s wrong with you?

Why don’t you just have a drink with us?” And I’m like, “I’m fine. Like, we can chill and just hang out, like, we don’t have to…I don’t have to get drunk to have a good time.” But it, yeah, it’s funny and it…you’re right, like when you think about just having wine with dinner, or when you go out with friends, like, you go to the pub and it’s so common to just get, like, a pint.

Si: Yeah. Well I was gonna say, it’s a good place to finish on, and yet I still have just a couple more questions that are relevant to the podcast. One is: Jamie has graciously agreed, and I will be going to the (oh god, he’s forgotten what it is), but I will be going to a conference this year with Forensic Focus to go to…sorry, how do you spell forensic?

Desi: This is Si saying, if you’re there he may be hitting you up for a conversation.

Si: This is Si saying a couple of things. Yes. (So event calendar, this is much later on in the year than I was going to be.) I was thinking of going to DFRWS in Bonn, but I will not be doing that now. I will be going to…where is it? There it is: the European Interdisciplinary Cybersecurity Conference in (oh god), Stavanger in Norway. It’s 14th/15th of June this year, and I am thoroughly looking forward to it because I’ve never been to Norway. So that’s it. And I know that you spoke to Jamie as well. Have you got that confirmation yet?

Desi: So, no confirmation yet because Australia is very far away. I may be at a conference in March and we’ll talk about that in the next couple of weeks if that goes ahead. If not, I’ll definitely be around the traps in Australia and New Zealand. So there’s a data security conference in Wellington, I believe, and then there’s the IAFS, where they’re presenting the new model for digital forensics as a more robust discipline in law enforcement and courts.

And that’s an international endeavor, and that’ll be in Sydney. So I’ll be there for that one. And then DFRWS later in the year, plus whatever other conferences that are around I’ll be at as well.

Si: Yeah, I’ll be doing the Forensic Expo Europe (that’s in London, so I have no excuse not to go to that). I did see another one that was in Birmingham, but then I figured out that it was in Birmingham…

Desi: Wrong Birmingham?

Si: Yeah. Birmingham, Alabama, which is like, “no, I don’t think I’ll be going to that!” I was like, “oh, Birmingham, that’s a couple of hours away”, as opposed to, you know, across the Atlantic! But yeah, so I’ll be doing that. But actually coming up week after next I’m gonna be up in London (and there will be a report on this on Forensic Focus), but the…I’m attending a competition.

I’m supporting some students from Warwick University who are in the Cyber 9/12 Challenge. And it’s a policy based challenge. So, basically they’ve been given a scenario they have to evaluate and decide what policy kind of things they’re going to implement to try and control the situation, as opposed to technical things that they’re going to do.

I mean, there are some technical aspects as well, because we’ve got a real (you know, almost a real) world vulnerability given to us that’s based around the Microsoft Exchange Vulnerability. And so we need to figure out how we’re gonna manage that and how we’re gonna manage the wider picture.

Desi: Is that the, you know, 2021 vulnerability, the kind of, like, hit everyone?

Si: Yes. It’s a spinoff of that. It’s a fictional scenario. So, essentially, it’s, you know…because there are patches out for that. But this is a sort of a step up and a theoretical…so, we’ve got a theoretical adversary, a group of disaffected oligarchs called New World Order who wish to get their money back, seeing as it’s been seized by sanctions, on one side.

And we’ve got this vulnerability and then we’ve got various things that we need to protect on the other. So, we’re sort of building this scenario of…and policy things. And they’re very talented young students. I won’t name them here, but I will credit them properly in the write up when I have their permission to do it. And that’ll come out on Forensic Focus.

But it’s a really good thing, it’s open to universities in the UK and I thoroughly recommend that anybody who’s interested in giving their students a slightly different experience in terms of cyber, this is a competition to be involved in. So, I’ll be up in London, I’ll be wandering around the BT Tower, which is an interesting landmark for us in the UK.

Desi: I think that’s a really good challenge by the sound of it though, for a lot of people who wanna be in cyber but don’t wanna be technical as well. Like, because that’s…a lot of the time…

Si: It’s really important.

Desi: Yeah. People ask that question. They’re like, “I really want to be in cyber”. And, like, some of the best incident managers I’ve ever seen are non-technical. They’re just really good at organization, like organizing people, organizing technical people. Because Si and I are not easily organized, that’s for sure!

Si: I’m not easily organized. No, definitely not.

Desi: You’ve seen our rooms, like, we’re definitely not organized!

Si: But yeah. It’s all alphabetical order, honestly.

Desi: Chaotically organized. That’s how…

Si: Chaotically organized. Yeah. It’s the old adage, isn’t it? It’s just like, you know, “if a cluttered desk is a sign of a cluttered mind, what the hell is an empty desk a sign of?”

Desi: Yeah. Before we do sign off, I will say if you guys enjoy what we do, please consider liking, subscribing, leaving a comment on the platform of your choice that you listen to or view us on. We do check them. Jamie’s really interested in kind of tracking, kind of, any kind of feedback that we have. And yeah, obviously at any of the conferences that we’re at, like, I definitely have run into people already since we started. So, that’s always funny.

Especially when people come up to me and they’re just, like…just start talking like they know me and…which is completely fine, like, I’m alright, but in my mind I’m like, I’m stressing so hard, because I’m like, “have I met this person before? And should I remember something about them?”

And then they’ll finish with, “oh yeah, that episode of Forensic Focus podcast, that’s what got me thinking about this”. And I’m like, “oh, okay, cool. You’ve listened, know who I am, but, like, I’ve never met you before.”

Si: Oh, what a relief. Yeah, I was gonna say, I…

Desi: Yes. Please stop freaking me out and start with that, that you have seen me on the podcast.

Si: Yeah. Good. Yeah, I’ve had a few people say that they’ve enjoyed episodes but people I already know who’ve told me they’ve enjoyed episodes, but I have yet to meet someone who’s listened and I don’t know. I look forward to the opportunity. Yeah. So yeah, if you do…if we are around, you know, please do come up and say hello.

Desi: Yeah, definitely.

Si: And any feedback is very welcome. So yeah, please do. And yes, please leave feedback. And you know, if you have feedback that’s not positive, we do want to hear it as well. Let us know what you think. Let us know…and also any comments you have that relate to the things we’re talking about.

If you, for example, work for the Northern Ireland Police and happen to know what exactly happened in this case and can enlighten us a bit more as to what’s going on and how this has panned out, I would be genuinely fascinated to know. And yeah. Just so, like that. Fantastic.

Desi: All Right. I think we’ll leave you there with this week’s episode. All the notes will be in the show links like normal. And we’ll see you guys again soon.

Si: Excellent. Fantastic. You take care.

Desi: Thank you.
