Combating The Rise Of AI-Generated Child Exploitation Material With Heather Barnhart

Si: Welcome friends and enemies to the Forensic Focus Podcast. Heather Barnhart is joining us from Cellebrite and we're very grateful to have her join us today. I've heard you talk on geolocation, actually. You gave a talk at a conference on that, and I think Desi's heard you on several things.

Desi: Yeah, the first one that I heard you on was the SANS course on memory forensics, which I don't think…it no longer exists, but I remember you being very excited about memory forensics, and that just, like, sparked quite a lot of interest in cyber for me. So, very excited to chat to you today as well.

Heather: Thank you for having me.

Si: No, it's an absolute pleasure. And I mean, we've both been suggested a list of talking points, but I saw recently a Twitter post from you to do with image identification as to whether it was AI or not. So, before we kick off into what is largely an AI thing…do you want to give us a bit of your background, introduce yourself and let us know how you ended up in this twisted field that we all call home?

Heather: Sure. I love how you say making friends and enemies too. So, hello, friends and enemies! I love that. I have been doing digital forensics now for 21 years. A really long time. And I never wanted to do this with my life. I thought it was a short stepping stone and now I cannot imagine my life without it. I truly don’t think…like when people talk to me, like, “when do you want to retire?” “Never”. They’re like, “oh, you should retire in 20 years”. “No, I’m not going to do that. That’s insane!”


But I am currently the senior director of community engagement at Cellebrite, and my biggest goal there is to create a place where people belong, whether it's friends, enemies, newbies or seasoned professionals, where we just keep learning and can actually speak the truth behind the data, versus believing the narrative. That's my biggest thing. So, getting a good message out and bringing a better message back inward to the product.

I am very active at SANS still, doing curriculum lead, authorship, teaching. I've worked everything from child exploitation, which is actually, oddly, how I started my career. So it's strange with AI, how it's circled right back to it. But I've been really heavy in mobile forensics for the last…oh my gosh, since 2009. So, really long time. Really long time! Which I thought, honestly, I thought mobile forensics was a one year promise when I took that job. Like, "yeah, I'll do that for one year". The iPhone wasn't even out, so I didn't think it could be something big. And here we are!

Desi: That's interesting. You said that it was like full circle with…you started in, I guess, CSAM and you're kind of back in this space now. So, from your experience, what's the impact been that you've seen over time? And then, I guess, covering some of the topics that we're going to do today: how have deepfakes affected that?

Heather: Yeah, so this is where…the deepfake aspect of it, it’s wild because it makes it so much easier. Something before…when I started, it was more like the baseball trading card. Like it was harder for you to get images and people protected them. And I feel like almost…it’s a vanity thing now, like you want to host things and you want everyone to know that you are the one possibly behind it and you can create all these specific images.

So, I think the pride and ownership, because we can hide behind a computer and social media more so than we ever could before, we could all be fake, like right now, we could all be deep fakes here having this conversation. So I think AI is making it easier for people to do terrible things, but also easier for us to detect it, if we leverage AI. So it’s good and bad. But people use good things to do bad things. This is why we can not have nice things!

Si: That was a wonderful way of phrasing it!

Desi: So even when you…well, I guess you kind of stepped out of that space…but do you think there was a time where it was still the baseball card thing, even with the explosion of the internet and social media, where people were less likely to host? Is it the deepfakes that are making people want to host more and boast about it?

Heather: Well, I think the deepfakes just make it easier. I guarantee, I could challenge my son, who is 10, and my stepdaughter, who’s 11 to come in here right now and get images of the three of us and put us on a beach together and they would be able to do it with Snapchat, even. Like the AI in Snapchat, they’d be able to take our picture and do these things.

So I think the technology makes anything accessible to everyone. And that's the hard thing. You hear of students that created images of other students, and then they were sharing CSAM, and all these things, like the person that took their high school yearbook and their little kid yearbook and created YouTube channels of inappropriate deepfakes. And then people see it, oddly, and they're like, "wait, that's, that's me from when I was 11. That makes absolutely no sense". So I think the ability to create is so much easier, because it's not like you have to have a camera and a child. You can just create it.

Desi: Yeah, and I suppose you’re also getting that people may be creating these things without the thought that it is CSAM, like you…in some cases, so there’s just more material out there, I guess.

Heather: Yep. And it's crazy. Like, my involvement…so I've worked with the National Center for Missing and Exploited Children for many years now, and it's full circle again. And the number of reports they get of online child sexual exploitation is mind blowing. It's every single day. And I've said this so many times at RSA and since then, but one image can ruin someone's life, whether it's real or fake. And that's where the research of real or AI came in, because it's tough. It's harder than you would think. Especially if you take your own image and then you use GenAI to say, "hey, here's this image, create other cool things of this". And now we're suddenly all at a beach having a cocktail together, but it's really based on a real photo of us.

Desi: And then you're potentially just adding yourself to the database, or if parents are doing that to their kids, their kids are then potentially in some database that can then just be used by the GenAI.

Heather: Yeah, and that's…I have people ask me all the time, new moms that do digital forensics too. And they're like, "is it safe for me to put pictures of my baby on social media? Are people going to do terrible things with it?" And that's what's awful. And then I do all these talks, like, you have to know who your friends are. Don't talk to strangers. Don't put anything out there on the internet. But it's true. It's like the 1980s again. We need, like, the cardboard milk carton again, the kidnapped and missing kids. It's like, don't talk to strangers. What's your code word? It's crazy.

Si: So what is the limiting factor, really, in telling the difference between something that's generated and something that's not? Because obviously there are all sorts of features in a photographic image which aren't present in a generated image: sensor noise, and possibly shadows going in the right direction, and all sorts of…you know, five fingers on each hand. I know AI is getting much better at this, but what is actually the limiting factor? Is it the fact that social media is taking these images, compressing them and just removing all of this useful information for us? Or is it something more insidious than that?

Heather: So it's a little bit of both. If you recover images from social media, that obviously makes it a lot harder because the metadata is stripped away in the background. But let's say you get my computer or my mobile device, and I am the creator of the images. It is still really hard. And that's the research that my husband and I just presented at TechnoSecurity. Honestly, I'd be lying if I said there weren't at least five times where I yelled at him: "why did you sign us up for this? You signed us up for this fresh research. We're moving. I have no time to do this. This is ridiculous!" Because I couldn't find the flag. I assumed there'd be some huge watermark or flag or something easy to identify AI or real, and sadly there wasn't. So then, when I took a step back, I started looking at, "okay, what does the real image actually have that's missing from the fake one?" So, gaps. The biggest thing I saw was camera make and model are gone. So it would just say, like, Apple Inc or Google, meaning the app stores.

So that's a huge thing: what is the make and model of the camera? But if it's on social media, you lose that anyway. So then you're back to looking at eye wrinkles and do you have five fingers? But some of them are hard. And I had three images on the screen, asking people which one's real. And they're like, "well, I don't know. The first one." And the thing is, they're all kind of real, but there's something AI about each one.
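(For readers who want to check the make-and-model gap Heather describes on their own files: it lives in the EXIF metadata of the image. The sketch below is a minimal, purely illustrative Python example, not the methodology from the research she presented; the file paths passed on the command line and the "worth a closer look" heuristic are assumptions for the example, and a missing make/model is only a hint, since platforms strip metadata too.)

# Illustrative sketch only: read the EXIF "Make" and "Model" tags that are
# often absent from AI-generated images. Requires Pillow (pip install Pillow).
import sys
from PIL import Image

MAKE_TAG = 271   # EXIF tag 0x010F ("Make")
MODEL_TAG = 272  # EXIF tag 0x0110 ("Model")

def camera_make_model(path):
    """Return (make, model) from a file's EXIF data, or (None, None) if absent."""
    with Image.open(path) as img:
        exif = img.getexif()
    return exif.get(MAKE_TAG), exif.get(MODEL_TAG)

if __name__ == "__main__":
    for path in sys.argv[1:]:
        make, model = camera_make_model(path)
        # A missing make/model is only a hint, not proof: social media
        # platforms strip this metadata too, as noted in the conversation.
        hint = "no camera make/model, worth a closer look" if not (make and model) else "camera metadata present"
        print(f"{path}: make={make!r}, model={model!r} ({hint})")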

Si: I was going to say, it's quite interesting because recently there have been AI-submitted images that have won photography competitions, which shouldn't have won. But there was a reverse of this: a guy submitted a real image and won an AI photography competition just recently, because it was such a good photo that it basically didn't look real, but it was, I mean, absolutely genuine. So we already have a difficulty in telling as human beings whether something's real or not. So, you know, with removal of metadata and the other sort of lower level information, it does become ridiculously hard. So did you carry out that research of sort of, "here are three photos, which one of these is real?" on a large sample set of people, and what were the results that came out of that?

Heather: Yes. Well, okay. So a few things. One: most people couldn't tell which one was real and which ones were fake. And then I showed them, like, "this is the real one. This is the fake one. And this is kind of how you would know", which is very difficult. I took the photos. Had I not known, it's really difficult to tell which are real or fake. But one thing that scared me is when you poll an audience, I think we had 240 people in the room, and you ask them, what is AI? What is GenAI? What is machine learning? What does it mean? No one knows. We all know just enough to be dangerous. And I feel like everyone's chasing it and wanting to be the thought leader of AI. And we're all learning, and clearly AI has been around a long time.

Like at Cellebrite, we've used AI for machine learning, for facial similarity. Is there a beard? Is the person a blonde? Things like that. And I even said, when people are so against this: our phones use it, and they've used it. You can search for dog. You can search for child. You can search for beach, and your phone knows, because there are databases in your phone that categorize that for you by machine learning. So it's always been around. It's just, I think, honestly, the fact that it happened to Taylor Swift is terrible for Taylor Swift, but fantastic for humanity, because everyone cared. Everyone cared!

If it were…if it happened to me or one of you, people would be like, “who is that person?” You know, we are celebrities, I know that, but seriously, like, if it’s a huge person that little kids adore, the public generally likes, she seems like a good enough human, and then all these horrible images appear, it caught attention. And I think it also caught the attention of law enforcement and legislation that there’s not much you can do about it, so people have to catch up.

Si: So, I mean, do you think that the solution is human? You think it’s not possible for us to either come up with the legislation…well, I mean, everybody breaks legislation anyway, so there’s not really much point in discussing that because there’s laws against CSAM now and that hasn’t stopped that as a problem. But I mean, in terms of, sort of technological controls, like, you know, enforcing watermark…or suggesting that we enforce watermarking on AI generated images. But you think it comes down fundamentally to the people and education rather than a technological control? Or is it a mix?

Heather: I think both, I think definitely both. So, the REPORT Act that President Biden just signed when I was at RSA (actually the day I was taking the stage is when it was signed)…I had a friend that was on the panel with me, Terrence Williams, and I asked him, he worked at Facebook, and I was like, "why weren't you guys reporting this before?" And he was like, "we didn't have to." And it makes sense if you think about it: on the back end, they then have to have experts that go through and say, "is this AI? Is this child exploitation? Is it child erotica? Is it inappropriate? If it's fake, is it inappropriate?"

And I spoke to Forbes magazine (or sorry, just Forbes, not Forbes magazine), and that was the whole thing. And we started going down the path of TikTok accounts where it’s child erotica, and I explained to her what child erotica was, because that’s the stuff I worked back when I was 22 years old in my original investigations. But if you look at the people that follow these accounts and the comments, it is horrible, but then some people were like, “it’s just AI, it’s beautiful.” It’s not beautiful. It’s children in inappropriate positions and adult garments that they shouldn’t be wearing. So, it’s completely inappropriate.

And then I was on an AI panel at TechnoSecurity (again, second time I've mentioned that, I should get, like, a special T-shirt from Techno!). But I was on an AI panel with our…Cellebrite has our philanthropic partners. So Exodus Road, Matt Parker was up there, and representation from the National Center for Missing and Exploited Children and Raven. And Raven deals more with, like, the lobbyists and the legislation and trying to get more things in place, because the more laws that are in place to stop it, the more it will obviously help.

Exodus Road is more finding the victims, freeing them, helping them, stopping, like the rings that are actually doing this stuff. And then NCMEC is there to receive the reports, help with the tips, help educate and help support. But it’s a giant disaster. It really is. And people are like, “oh, there’s not much you can do.” But if you do nothing, it’s not helping anyone. So if you do something and it helps a few people, it’s better than nothing.

Desi: So what are the kind of challenges then with…because I guess with traditional CSAM, there's obviously the victims and you can trace that back to the families and that, but with the deepfakes, it's taking images that are already out there and then creating this stuff. So what are the challenges that law enforcement internationally are facing to try and crack down on this, even without the legislation? I guess, from your knowledge, not just within the States, but in other countries as well.

Heather: Well, I think one big thing too is where is everything hosted? So I know in a lot of the work that I’ve done, anything that’s, like, Cloud hosted or associated to an app that’s somewhere that’s not in your jurisdiction and you’re in Europe, that could be a major issue on, can you even look at it? Can you report it? How do you support those things? So I think that is one thing, but finding it and reporting it. Like some people just see it and they’re like, “oh yeah, I saw this horrible thing. It was weird. And I just pressed on.” So, the detection, the reporting and then understanding: is it real? Is it not? But is it real or is it not, does it even matter if it is CSAM?

Desi: Yeah, right. Just gets reported.

Si: Well, I was going to say in the UK, the law states that both generated and real images are both an offense anyway. Is that the same in the US?

Heather: In some places. So I learned it's state by state; it's different. And I didn't realize this. I even learned…I had a SANS student once, and she was telling me that if it's a live image on an iPhone versus just a photo, that's worth 10 versus 1. So, if it's a regular photo, it counts as 1, but if it's a live image that you can press your thumb on and see, that counts as 10 images. I was like, "first I've ever heard that."

So that's the issue. I think state by state is different. Jurisdiction matters. No one knows how to properly report it or what to do. And I've asked several friends that I have that work major ICAC crimes at the FBI, "have you seen AI yet?" And they're like, "I don't think so." And that's something else we polled: "who in here has seen AI?", like, in a CSAM or ICAC investigation, and they didn't raise their hand. And then we said, "how many people here don't know if they've seen it?" And almost everyone raised their hand. Because they're not sure what's real and what's not, and then what to do about it either way.

Desi: And so for our listeners who might be interested, what is a good way of going about reporting this? Like, is it worth reporting it on…like, they see the apps, you've got the little report bar and you go, this is offensive material, could be, like, abusive to whoever it's made by. Or is it worth figuring out who, within your country, you should be reporting this to? Because, like, with Facebook, do they have a dedicated team? But then you've got TikTok, which is kind of up in the air with the whole thing going through, was it Congress, I think, or the Senate inquiry that they had? And there are probably many more apps that this is on as well.

Heather: I would recommend reporting it where you see it, but also taking it a step forward or further if you can, and go to takeitdown.org. And it’s on NCMEC’s site. And it can be AI. It can be real. It can be you. It could be your child. It can be anyone you know. And it asks you several questions like, “do you know this child? Do you know how old they are? Do you know who created it?” So you answer questions, yes or no. And the best thing is you point them to the image and they hash it. So they don’t actually upload the image, it’s not going anywhere else, but then it’s taken down. And are there false reports? Absolutely. But think of how many positives could be reported.
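(The hashing step Heather describes is why the image itself never has to be uploaded: only a fingerprint of the file leaves your device. The conversation doesn't specify which hashing scheme the service actually uses, so the snippet below is a generic, purely illustrative Python sketch of the principle, with SHA-256 standing in for whatever algorithm is really applied.)

# Purely illustrative: compute a local fingerprint of a file so the image
# itself never needs to leave the device. The real service's hashing
# algorithm may differ; SHA-256 here is only a stand-in for the idea.
import hashlib
import sys

def local_image_hash(path, chunk_size=1 << 20):
    """Hash a file locally, reading it in chunks, and return the hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    for path in sys.argv[1:]:
        # Only this short string would ever be reported, not the image itself.
        print(f"{path}: {local_image_hash(path)}")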

I didn’t know about this site until right before RSA, I think is when I learned it. Maybe January or February. So, there have been times when I’ve seen things where I’m like, “ew, that’s disgusting.” And if I would report it straight to Facebook or Instagram or whatever, great, but I could have done something more probably that I didn’t even know existed. And there’s one for adults as well. And I think it’s out of the UK and I can’t think of what the name of it is. I’ll have to look it up, but it’s takeitdown for adults. So if you find something that could be used as sextortion against you, you can say, “hey, I did not approve of this inappropriate image of myself and I want it to be taken down,” and they will take it down.

Desi: Maybe StopNCII?

Heather: That might be it.

Desi: I was just doing a quick search. Stop Non-Consensual Intimate Image Abuse. Yeah, it looks like you can submit as well. For those listening, we'll add both of those links to the show notes once we've confirmed them, if you're interested in checking that out.

Si: Yeah, we’ll go and do a bit of a search, see what we can find and share as many as we can!

Desi: So, I guess another point to that is also…because you were mentioning the…just people not being aware of what is considered, I guess, abuse material. So where would people go to learn that as well? Like, is there one of your talks that you’ve done that people go to watch a recording, or is there something else that people would refer to? Because I guess you need to be able to see it first before you know what you’re looking at.

Heather: So you mean to learn what inappropriate material is or…?

Desi: Well, even like we were talking about the TikToks, right? And then the GenAI, like, it's young girls. TikTok feeds are curated, so maybe only one in a hundred times you're coming across that. So, just for people to be aware of what is inappropriate.

Heather: Yes. You know what I’m pulling up the article right now, and she gives really good examples of…okay, the title of it is so bad, but…it is. It’s terrible. It’s “I want that sweet baby”.

Si: Yeah. Okay. That is a bad title. I agree.

Heather: Yeah. It's a bad title, but you can also look up "AI generated kids draw predators on TikTok and Instagram". That's a cleaner one, but with the title she chose, she got your attention. She took one of the comments from this AI-generated account, and I don't know if you want to specifically call out the TikTok page. I don't care. I'll tell you exactly what it is if you want to. It's gross.

Desi: Well, I’m surprised it’s still up if it’s garnering attention.

Heather: And it’s all child erotica.

Desi: Oh, is this the AI generated kids one? Is that the Forbes article? Yeah, cool. We'll add that to the show notes as well for those that want to have a read.

Heather: And I wonder if they took…

Desi: Maybe it’s migrated to a different account.

Heather: Yeah, it seems like it got some attention, which is good. Now it doesn’t seem like it’s existing there, but I found some others too, and I found some names of the people that were following them. And we started to try to focus on that because it seems like digging into the followers of these accounts could be interesting. Especially for law enforcement or anyone that wanted to chase it down.

Desi: Yeah. Well, I guess it gives you visibility of other accounts then, because if they're following one, they're likely to follow multiple and be in other forums and stuff.

Heather: Yes, they seem to be. And then I wonder how many of those are real though, right?

Si: That was my follow up question, which is, given that, I mean, on my completely decent, although very boring, Instagram account, I constantly get, you know, junk mail coming in saying "buy 20,000 followers for X pounds", or whatever it is. And the more popular your account is, the more followers you get, the more your advertising revenues go up, blah, blah, blah. I wonder what the proportion is. Is that anything you've had the chance to look at yet? Or is it still just the next stage of research?

Heather: Pretty much the next stage of research. So, I think I’m going to keep diving into this because the topic keeps coming up and I feel like…I honestly wasn’t sure at RSA if I was making a huge mistake talking about AI and CSAM and sextortion and deepfakes, or people would take interest in it. And it has, like, come back full force to me. And I was like, “oh, okay, chose a good topic. Clearly it was relevant and everyone seems to care”. So, I’m going to keep researching it. Even to the level probably of app teardowns to see if you find a specific app that you think created a photo, all the ways to tell if it was AI or real and where the original images came from.

Si: I was going to say, I think it’s interesting, and I think you’re right in the fact that people aren’t necessarily clear upon the legalities of stuff. I mean, we’ve been in the industry long enough and Desi’s been hanging around us long enough that it’s very clear to us what is and isn’t and what does and doesn’t constitute a crime. But the idea of somebody going, “well, I didn’t harm any children, there’s no thing. I’ve just stuck a prompt into an AI generator, and now I’ve got this image.” And in certain countries where…and Germany recently decriminalized possession of CSAM, which is a horrific idea, for the record. But you know, under these circumstances, you’ve got a country saying, “okay, you can own child pornography, and child abuse is wrong, but you can own it.

So why don't you go out and generate your own", effectively. I think it's a real issue that this…and Desi was fairly asking, you know, how do we identify it? And how do we report it? It's actually a challenge, certainly at the lower end of the spectrum. I mean, the top end is very obvious, what is and isn't. But at the lower end, I think it's not clear to quite a lot of people what actually constitutes child exploitation as a concept.

Heather: You're exactly right. And even the journalist I spoke to at Forbes, when I said child erotica, that was a new term to her. And she's like, "what does that mean?" And I was like, "well, it's not full on CSAM, but it's children in inappropriate positions, even looks on their faces or activities they could be doing." There's this mom on Instagram and TikTok, and she posts her daughter. And she gets a lot of flack for it. I don't know if you want me to say the name, but…

Si: I think erring carefully. We’ll let the listeners use their imagination.

Heather: Yeah, they can look. But she’ll…

Si: I’ll give an example in a second as well.

Heather: So yeah, she'll hand her daughter, who is three years old, a popsicle, and she's in a skimpy, skimpy bikini. And then she takes a million pictures of her daughter, like, eating this popsicle. And the way it's posed is so inappropriate. And she may be doing it for likes. Her daughter's adorable, but think of the people that are looking at this stuff. That's where I'm like, "come on! What are you doing?"

Si: I had quite a similar case. Hang on a second, I don’t mean to be rude or to suggest it’s only an American thing but the idea of beauty pageants in the US where children are being dressed up and made up and…

Heather: JonBenét Ramsey.

Si: I had a case of that, where I dealt with a lot of CSAM related to it. And the images went from what you'd expect to see, the sort of publicity shots that you see on television for the beauty competition reality TV shows, and it just got worse and worse and worse as it went on. So yeah, the sooner we nip it in the bud the better, is my opinion on this one, so…

Desi: Well, it's an interesting point there about what child erotica is…this is the first time that I've heard that term, and it makes complete sense. But if you look at traditional media, TV and print, children have been exposed in that sense since it began. Like, under-18 minors, particularly in, like, the music industry, and like Si said, the beauty pageants. So, have you seen any…because I think that's a societal problem, like, we're almost desensitized to it because it's been used as a marketing tool. Is there any education happening at the moment that you've seen, or research, around the ethics, teaching people that child erotica itself is part of the broader problem of CSAM, and that from a young age we should be teaching kids that, ethically, it doesn't matter what the legislation is, that it's…?

Heather: So there's a few things here. One: I mentioned the Exodus Road. Matt Parker's a co-founder of that. And he actually emailed me this morning to remind me. There is a training available. It's called Influenced. And right now it's offered only in person, but as soon as he mentioned it, I was like, "oh, we've got to get this word out there", and it's going to be online. But it's education on what everything is, like, what is CSAM? What is child erotica? How to prevent it, how sextortion happens, how trafficking happens, like, how easy it is to go from a normal conversation down this crazy rabbit hole. So that, I think, is more designed toward parents.

However, I mentioned this even at RSA: my son joked with me about how he thought Mike Tyson was following him on Snapchat, and this blew my…I was like, "you've got to be kidding me." I'm driving, I'm looking back, like, doing the arm-over-the-seat lecture. I'm like, "are you serious?" I was like, "one: if Mike Tyson's following you, that's creepy. And that's not okay. He doesn't want to be friends with you, Jack." And I was like, "but two: how do you even know it's Mike Tyson, Jack? It could be someone that's fake. This is exactly what I'm talking about!" So I gave him the whole sextortion talk. And then even from the other side, like, if his classmates sent him an inappropriate photo, he has to tell me immediately, because I was like, "Jack, that could turn into the police knocking on our door and saying, your son has this image and he created this." I was like, "and then I can't defend you." So I think you have to not be afraid to talk to your kids and tell them this is what happens.

And back to Taylor Swift, like, my seven year old, she's like, "mama, someone put really mean pictures of Taylor Swift online". And her concern was that Travis Kelce was going to break up with her. Like, Natalie! But she knew it happened. And she's like, "they were fake pictures and they were mean and they showed parts of her body that people shouldn't see." And I was like, "you're correct. It's inappropriate". And I think just talking to the kids about what is appropriate, what is not, what do you see, when they have to tell. I think that's going to be our best next line of defense. Because they're the ones that are going to have it happen to them, unfortunately, and the ones that are going to be the creators.

Desi: Yeah.

Si: I was going to say, it's an interesting point that you bring up about that underage sharing of images, which to them may not have any sort of malicious intent or anything other than curiosity. But the bottom line is that that is then production and distribution of a CSAM image. And therefore, you know, like you say, if the police come knocking on the door, it's a case that needs to be made. Now, one would hope that the court has some common sense, but my experience of courts is not necessarily always that way.

Heather: Yeah, and I don’t know if you all are seeing this, but I’ve helped with some cases where someone gets caught doing something. They took a photo of someone, they shared it with another person, and then they delete the app and they delete the picture. I’m like, “why did you delete it? It makes it so much harder for us if you delete it, let the truth be there and let us say what the truth is” versus trying to scramble and now it’s someone’s word against someone else’s because everyone deleted all their things from their phone.

Si: Yeah, it's harder to work with less evidence. I mean, the same is true for every aspect of forensics, digital forensics and meatspace forensics and all of it: the less information you have to work from to create your picture, the less full that picture is. So, yes. And people do it, you know, because they've used the app and they've got bored and thrown it away, or they panic and delete it, or all sorts of things. But yeah, essentially evidence is ephemeral, or some evidence is ephemeral, and you can't get it back.

Heather: I love the word ephemeral. I don't love what it does to us forensically, but I love the word. And you know what? You just motivated me. Now I want to…I wish I knew the proper platform to do, like, little mini-series on talking to kids, more than just going…I'll go to a random school, a high school, and talk about their digital fingerprint and how everything they do can wreck their future. But just to remind kids: "if you're asked to do this, this is something you should do instead". And there's…what is it called? NoEscapeRoom.org. But that's also more for parents, to see within 10 minutes how their kid can easily be sextorted. And it's terrifying. It's like a choose your own adventure, and you choose the answer that you think the kid will say. And then it tells you essentially how it's going to go wrong anyway. It's like, there's no good to come of it, but I did it honestly, thinking, "okay, this is what's going to happen". And it's terrifying, but no one's really talking to the kids, except trying to educate parents on this is what it is.

Desi: You could create a "Heather teaches digital forensics to kids" YouTube channel, and you can have, like, a little stuffed animal and talk to kids. And it's like…I don't know whether the US has, like, playschool, but we have it in Australia, and it's like, they go through and tell them stories that way. But, yeah, if you've got the kids' channel, then it gets pushed out to the kids. Do it that way.

Heather: I should do that.

Si: I think, like you said, you know, you are going out to do education in schools. I've seen other people doing it. Some of my colleagues go out and do it, who are better at talking to children than I am. They tend to run away from me for some reason! But I think it does need to become part of the curriculum, fundamentally. Now, we accept that we're going to teach them Scratch programming because everybody needs to be a programmer, but we don't teach them that, you know, posting stuff on social media is not that great an idea. My children have been terrified over many years by, you know, terrible stories, so they're very well behaved, or at least they appear to be, at least to me.

Heather: Same. I probably share too much, but I want them to think “if you post on YouTube, someone’s going to know where you live and they’re going to come here and get us”.

Si: Yeah! There’s pretty much that level here as well. And, you know, it’s an interesting one though, is where should that responsibility lie? I mean, ultimately, all responsibility lies with parents to look after their children. My feeling is that part of the issue with society is that parents don’t take enough care…that, I mean, that’s a broader conversation to be had. But the thing is that I don’t think parents truly understand.

Heather: I agree.

Si: And then they’re told that they need to talk to their children about it. And then they have a conversation going, well, “please don’t buy extra things in Fortnite because I can’t afford it”. And that’s the bit they understand where there’s a financial implication. But like you say, you know, you go into a scenario and you follow through the questions and suddenly the computer has been owned and the camera is on and it’s watching you in the middle of your bedroom while you’re asleep and everything goes, well, to use a phrase, wrong. So yeah, it’s an interesting problem where technology is advancing faster than knowledge is advancing in the general populace.

Heather: Yes.

Si: Well, it’s advancing faster…knowledge is increasing in me as well, I find generally speaking, you know, the amount of research that we do…I do now to take on a new case is quite astonishing. It used to be that you would do it on the basis of what you’d done before and now it seems to be a research project from the outset.

Heather: Every single time. I agree. Every case I touch, I’m like, “here we go again”. It’s true. It’s true! I don’t know, I think today…one: kids, I think are more tech savvy. We are different because we do this for a living. But most of my friends have no idea how to even set up parent-child accounts on devices. So their kids just have full access to everything. They don’t know how to get into it. So I think there’s that ignorance, but I also think we live in this bubble of, “oh, I can’t talk to my kids about that”. Like you want to pretend it’s not going to happen, and then it does. Which is unfortunate.

Desi: How do you see the…I guess, like, because this seems like it’s quite a human problem in solving a lot of this stuff. And just because we’re chasing our tails with technology, as new apps come out, new tech…like I saw a new technique, it was coming out for Adobe, the whole hand thing with AI, how there was always like seven fingers, and it was kind of all messed up, and Adobe’s like, “oh, we’ve solved the problem”, and it’s like, you can get much better photos now, which is great for whatever you’re legitimately using it for, but it means actual material like this is easy to generate. How do you feel the role of digital forensics will play into this going forward, knowing kind of the state of play now with how hard it is to use digital forensics to really tell the difference between the two?

Heather: I feel like at this rate, digital forensics is all we’re going to have because everything is created using electronics. I think our old methods of detection will be great and help us accelerate to get access to the evidence faster, but with the way technology is going and even AI built into our tools, we may actually get new insight into older cases and things that we’ve worked. Like, I’m not going to lie, I think back on (and this has nothing to do with this topic) but some of the counter terrorism investigations and stuff I worked, the fact that you can now check a button for weapons to be detected, that would have been glorious. I was going through looking for guns and hand grenades and machetes, you know, like, you name it. I was going through every single picture looking for those. So I think the technology is advancing to keep us up.

But my greatest concern, and this is something I will always say, is people relying 100% on the tool and not validating. And that drives me crazy and terrifies me. And yes, I work at a vendor, and yes, I teach, and everywhere I go, what I say is "trust me, but validate; listen to me, but find it yourself". Like, I'm recreating an entire murder investigation on this phone just to make sure that what I think is true is true. Like, the tools can tell me one thing, but I want to validate.

Si: I think that’s doubly important when we look at AI stuff. I mean, humans are fallible anyway. And especially when it comes to things like CSAM, where at least in the UK, there’s no qualification for aging. So it’s a best guess estimate on what the examiner at the time thinks, and that’s an interesting thing. And I’ve worked both sides of that divide and I’ve had to reject assessments of CSAM because the person in the picture, whilst they looked young, was wearing a wedding ring, had tattoos and was clearly over 18 if you actually took 10 seconds to evaluate the picture.

So, you know, the idea that an application would be able to leverage the degree of detail that's required does bother me. It's a…I don't know (and I apologize, and I'm not going to bad mouth Cellebrite on this front because I honestly don't know), but I've seen vendors not fully disclose that it's only a sort of probability scale versus a certainty, and the salespeople tend to sell it much more as "we can find every image that you've ever, you know…and it'll solve all of your problems", versus "there's a chance that we'll find probably 90% of the stuff that you're looking for and 70% of that will be wrong".

Heather: And it's true. And you know what, even if…so Cellebrite has this confidence scale, it's, like, this big in the tool. I did a Tip Tuesday on that. I remember someone was like, "oh, do you know you can change the confidence level?" I'm like, "no kidding, really? I've done this for 21 years!" I'm like, "huh, look at that button". But I am a person who…I have falsely believed that everyone out there is like me and will validate and make sure and follow the source. And I'm told, "no, that's like 2-5% of digital forensics as a whole". I'm like, "you've got to be kidding me". So people don't even know that confidence button is there, so they're not going to change it. Like, there could be makeup instead of cocaine and it's like, "oh, okay. I could kind of see how that makes sense". And then you change your confidence level and you're like, "nevermind, everything just went away".

Si: Yeah, exactly.

Heather: But no one knows how to do those things. That’s the issue too. So the tools put all these things in it and everyone presses buttons and then gets their answers. And I…oh my gosh. Once at a SANS class, I was showing Axiom and Cellebrite and everyone’s going through and using this, and the lady, she raised her hand, she’s like, “I appreciate your time this week, Heather, but I have no time to do all these things. So I’m just going to dump my phone and print out a report”. And I was like, “I hope I never am under an investigation that you work”. Like, please don’t ever investigate me! It’s scary.

Si: It is. And it’s quite a mark of how poorly funded some law enforcement departments are that they are just feeling so much pressure to turn cases around that there’s so little care in here.

Heather: Yep. And I don’t know about you guys, but if you meet people and you’re like, how did you end up doing digital forensics? “Oh!” They always have…it’s like me, I’m like, “oh, I went…this was a stepping stone”. But it’s usually someone they’re like, “oh, you know how to work a computer and send email? Good. You’re our computer forensic investigator. Come on over here”. It’s crazy. It’s almost like nobody wanted to do it.

Si: I had that conversation with someone last week. Literally last week, who was on a course with me who had been in the role 10 weeks and had just been sent there because he knew something about computers and, you know, it was…he was doing very well on the course, I have to say, he was fine, but, you know, it certainly wasn’t a choice. It wasn’t a life’s calling like it has been for some of us.

Desi: Si and I get a skew of people who didn’t want to be in it, and now they’re in it for life because they’re the ones that are like doing all the things and like get quite high up and they’re just like, “I can’t believe I got here. Like, I used to be an architect and now I’m, like, running a digital forensics program” or something like that. So, yeah, it’s interesting that we see the different skew in general. And I don’t…I work from home so I never leave. I never meet anyone else. Just my dogs.

Heather: I work from home too, but then I teach and do…it’s community events and…

Desi: All the conferences and stuff, and yeah.

Si: Yeah, it is a very interesting community. And actually I think whilst I occasionally despair at the lack of basic computer science knowledge some of my colleagues have, I do think that we’re a stronger industry for the fact that we have such a disparate base. I think it does bring new and novel thinking in. And I think it does make a difference that there is such a broad set. Especially where we’re drawing from experienced investigators. I think that’s an important thing. Throw a bunch of computer geeks in the room and they will tell you every fascinating and interesting detail about the way the file system has been stored and the file format, but they will actually forget to do the investigation part. So it is quite good to have some colleagues that remember what we’re actually here for occasionally.

Heather: It’s true. And honestly, like with AI and social media and all the things, the younger generation straight out of university, what they do is totally different than I guarantee what we do on our phones and how we post on social media. So I think even having that knowledge of what is possible and then have us seasoned people that are like the grouchy old people that want to dig in and figure out all the things, I think it’s a perfect blend for organizations for sure.

Desi: Yeah. And I saw a video the other day. This was not to do with CSAM, but it was generating essays for school or university, but then AI programs were used to pick it up out of ChatGPT, so there was this other AI program that you could use to humanize it, which then got around this other AI thing, and I was just like…but that’s that next generation, like, kids, if they want to use it, they will figure out how to use it and get around it, and a portion of that society, unfortunately, will turn into the more…the bigger figureheads in this space of creating CSAM and avoiding detection.

Si: It’s next level, isn’t it?

Desi: It’s just been interesting.

Si: My son, if he ever listens to this podcast, will complain about me outing him for this, but in his era he just changed the Wikipedia entry to read what it was that he wanted it to say so that he could win the argument. And with me, I hasten to add! He was like, "dad, let's check it on Wikipedia. Oh, look, it says what I'm saying it says". Little sod. Yeah. So yeah, they are going to figure out ways of using technology that we've never even dreamt of. But AI, I think…I think the thing is that, for me, the scary part of AI is that nobody really understands how it works. Not in the sense that…I mean, the people who write it, they understand that it's taking a machine learning algorithm and it's pulling in a load of data and a large learning set, and it's then constructing something from it, but they don't really understand how it works. You know, open, white-box AI, where you can see how it's come to its conclusions, is rare. It does exist and it is possible, but it's very rare.

So, you know, for these scenarios, it’s very, very difficult to understand what’s going on and what the implications of running it through three different AIs actually is and where those intersections are going to be. And we’ve seen that problem with the inherent racism and sexism we see in AI generation, especially image generation. You know, you put in a request…it’s gotten better, but you used to put in a request for a role, if you put in a nurse, you would end up with a white woman. If you put in a doctor, you would end up with a white man. It’s horribly biased, or it was horribly biased. I know that it’s been sort of…but it’s fundamentally on the basis of the training sets. So, careful choice of training set is so important.

Desi: What’s next on, I guess, your roadmap of this career that you didn’t want, where are you going to next? Research?

Heather: So honestly, if I could do anything, if you’re like, “Heather, what do you want to do?” And it doesn’t matter. I love doing this stuff, like right here, doing research, talking to people about it, trying to stop the ignorance in DFIR and educate people on how to do things quickly, because people don’t have time and they don’t have time to properly train, they don’t have time to validate. So just education in general. But I love the research.

I want to keep doing the research, but based upon conversations like this. I love reaching out to the community, talking, researching, and I still love working the hard cases when people get stuck and they’re like, “help us solve it”. That’s what I want to keep doing. So I hope that’s what’s next. What I don’t want to be next is just straight into, like, manage smart people that get to do all the fun stuff. I don’t want that. I like to be a worker bee. I like to do the work.

Desi: Yes. It’s terrible when you watch smart people working and you’re like, “I just want to join in”.

Heather: Yep. Apparently Cellebrite uses our own AI learning. So I think it’s the same thing as machine learning. The more you feed it, the smarter it gets.

Si: Fair enough. You just have to figure out what you’re feeding it now.

Heather: I know. “What are we feeding this beast? Do tell.”

Si: I think this has been absolutely fascinating and I hope that you will join us again in future to talk about where your next piece of research goes.

Heather: I will.

Si: Because first of all, you're an excellent guest and it's wonderful to talk to you. And you've made this a very easy and pleasant hour that's zipped by. But what you're doing is really important. And I think breaking the cover on what appears to be a black box of AI is the only way we're really going to make it a tool for us to use and leverage, as opposed to a tool that owns us. You don't want to go down the Terminator scenario too much, but I think the less we understand about it, the higher the risk that we will all be, you know, murdered by a T-1000 or something. So, I think what you're doing is really important and I'm really glad that you're sharing it with the community the way you are, on Twitter and in your outreach stuff. So thank you very much for joining us.

Heather: Thank you. And thank you for letting me be so candid.

Si: No, no, it’s a pleasure. It’s what we’re after!

Desi: Yeah, no worries. It’s been great. Like, I’ve learned a whole bunch and lots of links for our listeners to go and hopefully educate themselves and just start that conversation with their friends as well. Because I guess that word of mouth will help. So, really appreciate all that you shared with us today.

Heather: Thank you for having me.

Si: A pleasure.

Desi: Awesome. Well, thanks, everyone, for listening and being with us this week. If you're watching, you can get this on our website, forensicfocus.com. We're also on YouTube, or any of your favorite podcasting apps if you just want to listen to us on the go. But we've had a really great time today and we'll catch you all next time. See ya.

Si: Cheers. Thank you.
