FF: Lana, tell us about your background and how you became Director of Operations at the Clinic to End Tech Abuse.
My professional training was in traditional computer science (doctoral work in distributed systems at UCLA and an undergraduate degree in computer science at Columbia). I was always very interested in the logical frameworks that allow us to reason about complex systems and make safety guarantees about program behavior.
Separately, I was a peer advocate and consent educator for sexual violence response as an undergraduate. As a graduate student, I actually began by volunteering with the Clinic to End Tech Abuse. I was lucky enough to receive a Computing Innovations Fellowship to join Cornell Tech as a postdoc to explore the intersection of these two paths, and very quickly fell into the role.
FF: What does the Clinic to End Tech Abuse do?
We provide direct services to survivors of intimate partner and domestic violence who are experiencing technology-facilitated abuse, so think of situations where an abusive partner always seems to know where a survivor is or what messages or emails they’re sending.
A separate but really important component of the clinic is that we thoughtfully instrument our service so that we can, with informed consent, draw on as much of our experience working with survivors as possible to achieve broader impact through academic research, policy, or feedback given directly to technology platforms.
FF: How is technology being used for stalking and abuse, and what measures can be taken to mitigate these risks?
So the answer to this is both very simple and also very complicated! The simple answer is that many abusers are repurposing pretty unsophisticated tools: sneakily adding themselves to Find My Device technology, using cheap Bluetooth or GPS trackers, knowing the answers to security questions, or using their status as a shared owner of a phone contract to take over a phone number or email account. We call this dual use: abusers take technology that has a legitimate purpose, or their rightful claim as a shared owner of a device, and subvert it for abuse.
From a technical perspective, that is really interesting. At an infrastructural level, abuse might look identical to legitimate use. So we need to put a lot of thought and creativity into architecting technology to be resistant to abuse.
FF: With rapid technological advancements, how do you see patterns of tech abuse evolving?
Overall, I think we can confidently say that all technology can be abused and that digital abuse often mirrors analog abuse. But we’ve also seen that technology has a peculiar capacity to extend the reach of abusers over space and time, to evade the accountability afforded by historical protections like restraining orders, and to entrench existing inequities in the resources available to the most marginalized survivors.
Advancements in AI are poised to continue this trend, with tools like voice emulators being used much like phishing and identity theft attacks, and deepfakes being used for extortion in a similar way to non-consensual intimate imagery (“revenge porn”). Yet these newer forms of abuse are difficult to trace back to a source and create more egregious harm for survivors who don’t have access to, for example, appropriate legal representation.
FF: How can first responders within the legal system improve the response to technology abuse?
Staying on top of this constantly evolving threat is incredibly difficult, and the number one need is for investment in training and continuing education as a field. The good news is that research shows that abusers are often not using very sophisticated technology. What’s more, they often exaggerate their abilities as an abuse tactic. That means equipping first responders with some very simple strategies to secure an email address or phone and to document tech abuse can be very effective, which is great because first responders are often already overstretched.
FF: How can tech companies do more to prevent their platforms and products from being misused?
First and foremost, this would ideally include funding research and services that are already working to counter abuse. Second, as I’ve said, we often can’t distinguish abuse from use at the platform level. So there is an abuse detection problem (content moderation), but also an anti-abuse design problem, in which tech companies can essentially red-team their own platforms and products and then design mitigation or recovery strategies based on what they’ve learned. In either case, engineering teams need to take accountability for abusive uses of their products and embrace trauma-informed and abuse-resistant design.
FF: Is there a need for more technology abuse clinics? Tell us about the Technology Abuse Clinic Toolkit and how it can help in setting up a new clinic.
Yes, definitely! We get requests from survivors outside of our service area that we need to turn down due to capacity. So there’s definitely demand. It can be intimidating or nerve-wracking for a technology expert to sit down with survivors who are in crisis or in a very vulnerable moment in their life.
The toolkit was our way to share our experience: what challenges to expect, what training to seek out, and how to structure a clinic to build in as many safeguards as possible to make that a little less scary.
FF: And finally, what do you enjoy in your spare time?
Working in tech abuse can be mentally and emotionally draining, so I love the most vapid, no-brainer activities to recharge! For me that’s weight lifting, vegan brunch, telling my cat that she’s just a little baby, video games, and deep dives into trash reality television (Vanderpump Rules).