- [Landon] Welcome to the Cyber5, where security experts and leaders answer five burning questions on one hot topic in the actual intelligence enterprise. Topics include adversary research and attribution, digital executive protection, supply chain risk, brand reputation and protection, disinformation, and cyber threat intelligence. I'm your host Landon Winkelvoss, Co-founder of Nisos, a Managed Intelligence™ Company. In this episode, I talk to the Director of Technology at the Global Internet Forum to Counter Terrorism, Tom Thorley. We discussed the mission of GIFCT and how it's evolved over the last five years, with particular interest in terrorist messaging across different social media platforms. We also discussed the technical approaches to countering terrorism between platforms, and how the organization accounts for human rights while conducting its mission. Stay with us. Tom, welcome to the show sir, would you mind sharing a little of your background for our listeners, please? - Sure, and thank you for having me. I'm Tom Thorley, I'm the Director of Technology at GIFCT, and we're a not-for-profit organization who specialize in helping prevent terrorists and violent extremists from exploiting digital platforms. I've been working in counter terrorism for about 15 years now. I was at the UK government for most of that, working at the kind of intersection of counter terrorism and data science, and then I moved over to GIFCT in January and have been leading our technology work here for the last 10 months. - There are certainly fewer organizations that live and breathe counter terrorism compared to nation-state cyber security efforts, but nevertheless it's an important mission, one with ramifications from a lot of perspectives, and the world has combated terrorism in a lot of different ways and still does.
I think that we'll kind of get into the evolution, but at a very granular level, I'm kinda curious for an overview of the mission of GIFCT and some of the successes in the past five years. And then I'm also kinda curious how the mission has evolved as well. - Yeah, sure. And you make a really good point, which is, as the kind of counter-terrorism landscape evolves, the blurring between these various different harm types online, you know, cybersecurity, counter terrorism, mis- and disinformation, and the interactions between those fields is definitely changing. GIFCT's particular mission is to prevent terrorists and violent extremists from exploiting digital platforms. So every day we're working to shrink the spaces online that are available to those bad actors to operate. We really see GIFCT as where the conversation around this particular intersection between technology and terrorism is happening. We're bringing together key stakeholders from industry, from government, from civil society and from academia to foster a kind of environment of collaboration and information sharing around counter terrorism and violent extremism. Our evolution over the last five years has been kind of iterative. We started in 2017 and we were funded initially by Facebook, Microsoft, Twitter and YouTube. After that, additional companies joined and it became a consortium, each having their own staff available to some degree to help with the mission. But following the Christchurch shootings in March of 2019, it was really clear the mission had evolved, and the live streaming of that horrific violence made it clear that we needed to move beyond ad hoc support for this mission and have a full-time, dedicated team of experts working to fulfill the mission of GIFCT. So, in 2019, we announced at the UN General Assembly that we would become an independent organization. And that actually happened in December of 2019.
We became a fully established 501c3 registered in the US. And then, throughout 2020, we were obviously slowed a little through COVID, but we brought in our first Chief Executive Officer, which is Nick Rasmussen, who was the former Director of the National Counter Terrorism Center in the US, and then our other staff, which includes myself as Director of Technology and Dr. Erin Saltman as Director of Programming, along with a handful of other subject matter experts in counter terrorism and technology. We now have 17 member companies, so that's 17 different technology companies who work with us. They have to go through a kind of period of mentorship to make sure they meet specific criteria, which we can go into a little later if you like, but really we're working with those 17 companies to prevent further exploitation of their platforms and to strengthen how companies respond to terrorist and mass violence attacks, as well as providing research workshops and technology resources, so that members can constantly learn about new evolutions in the threat landscape and approaches to combating them. - I'm curious to kinda dive down this rabbit hole a little bit. So, if I'm thinking about the world of counter terrorism between 2001 and 2015, obviously with Al-Qaeda being the primary adversary, you think about Inspire, you think about their leaflets, so to speak, that come out online, a very old school type of approach in terms of getting a message out. Then think about how ISIS, the Islamic State, runs their messaging platforms. I mean, it is a full-blown marketing campaign, just like a marketing team works in a regular company. And you have technologies that ultimately produce a message and disseminate that across all major platforms, really at the click of a button. And you can do that from an automated perspective, you know, where you're getting your messaging out to the millions in a matter of seconds. So they took it to a whole other level.
And now, ultimately, you have ISIS certainly degraded to some degree over the past couple of years. And of course now there are new extremist elements that are evolving there. I guess my question is, first off, is that a fair assessment? And secondly, you know, how has that change in landscape brought a change to how you guys are doing business? - I think your characterization is excellent. And I know you have your background in counter terrorism so you're well-informed in this area. I think the way to think about it, for your listeners, is that, actually, terrorists are like you and I, right? They use the internet, they use the tools in the same way as you and I. They have different aims and objectives obviously, and a few minor differences in what they do, but broadly they use it in the same way. So if you look at how the internet evolved over that time, we went from a kind of old school web 1.0, where it's basically broadcasting information out, which is where AQ were, you know, the old school publications they put out, through the kind of much more modern media apparatus that ISIL had. And now, really, we're talking about how the web is fragmenting, how the online spaces are fragmenting and how communities are fragmenting, and the terrorist of today is really a digital native operating on multiple different platforms, moving seamlessly between them and using those different platforms for the different functions that those platforms are designed for. So they're really taking advantage of the modern internet in the same way that you and I do every day, and they have multiple apps on their phones and multiple devices, and it makes the challenge very much more complex to combat. And the only way we can really do that is by having a multi-stakeholder approach.
And what I mean by that is, I need the tech companies to be playing their part, and that's what GIFCT is set up to do, but I also need the governments around the world to be helping support that and working with us on that. I need civil society to be providing a sensible ear, to make sure that we're not straying into problems around content moderation and freedom of expression or privacy issues or security issues, and making sure we're balancing human rights across those things. And I need academia to be telling me how the technology landscape is changing, how the terrorist use of the internet is changing, how terrorism is evolving in general. I need all of those people to be coming and doing their small part to be able to get after this, because no one of us on our own is ever going to be able to really make an impact on this mission. - Weaving in human rights here, a lot of nation states, and human rights abusers especially, think of countering messaging as having some type of software that's scanning, or remote code execution, on individual devices. That's not gonna fly, certainly in Western societies. It's not gonna fly in any kind of private sector business in Western society. It's not gonna fly in any type of non profit in Western society. So when you think about countering that messaging, what are the definitions that you use to combat across the platforms, and are you seeing it transition almost away from specific groups to broader movements? - Yeah, absolutely. And I think the first thing to say is that there's no one agreed-upon definition of terrorism, right? If you talk to five different counter terrorism experts, you'll likely get five different answers.
So at the beginning of 2021, we launched an effort to draw on a wide array of relevant experts to consider how best GIFCT could expand the taxonomy of terrorism that we use beyond our initial starting point. And that initial start is really the UN Security Council's consolidated sanctions list. That's a very useful list, in that it is a kind of basic agreed-upon set of groups across all of the United Nations, and there's some rigor that goes into making sure that the right groups end up on this list. The challenge with it is, because it's agreed across the UN, and because terrorism evolves very, very quickly these days, the list is biased by the history of counter terrorism. So it has a very strong, robust set of groups on the kind of Jihadi side, but in some of the more emergent threat areas the list is not necessarily as robust, and it struggles to keep pace with the emerging and changing landscape of terrorism around the world. And so, those biases cause problems when it comes to content moderation. So how do we move beyond that? Well, that's a big challenge. The experts that we brought together really thought about that in two ways: one is, do you expand the list by looking at other lists, incorporating lists from other governments, or do you look at behavioral indicators? And we have tried to do a little bit of both of those things. The first step that we took after the shootings in Christchurch was, we built out what I would refer to as an incident-response framework. And that allows us, when there's a crisis going on in the world, to look at a set of behavioral characteristics and to use our technology called hash sharing, which I can go into in more detail in a second, to capture batches of content relating to that particular crisis. So, we've done that a couple of times, with the attacks in Halle, Germany, and with Glendale, Arizona, and those allow us to get beyond the designation lists and look at a specific event.
And the second way we are thinking about that is looking at the expansions that we're about to make, which is threefold. We're looking at the manifestos from terrorists who have just carried out an attack, and those are normally in PDF form, although there are some other forms out there, some people have done videos or just general writings. Then terrorist publications, which have specific branding, you know, so think of things like the Inspire magazine that you mentioned at the start. ISIL obviously have their examples of magazines like Rumiyah, Dabiq, and now really al-Naba is the focus there. And then, on the right wing side as well, there are a whole range of zines, including things like Siege and Iron March and publications like that. So, there's a whole range of different things that we can capture in that sphere. And then the final one that we're looking at is URLs identified via our partner Tech Against Terrorism, where specific content exists on a set of platforms. And that allows us to go beyond what we're doing right now with images and videos, and take more of a behavioral approach to identifying where content is being shared, not just on our members' platforms, but more broadly on the internet. - Let's kind of pick that apart a little bit. When I hear hash sharing, I think about the traditional cyber security sense, right? You take an IP address or a domain, you make a hash, and it's called an indicator of compromise. You then see if that's been anywhere in your environment, and if it has, you know there's potential compromise within your environment. But of course, if you change that IP address, just for the listeners' background, I mean, a hacker can change his IP address and domain at the click of a button, within seconds, and therefore it's gonna generate a new hash.
So, you know, it's problematic to say that IOCs are gonna work against a sophisticated hacker. Is it something that's very similar with hash sharing across social media platforms? So, if a message goes out, right, and like a single word is changed between platforms, is that gonna be problematic? Are there technical ways to kind of combat that? And I guess, more broadly, what are the technical approaches used between platforms, and how does that come together in an automated fashion? - Yeah. That's an excellent way of thinking about it. There are some slight differences. So, where an IP address is generally tied to an individual device or a network gateway or something like that, content is not necessarily associated with an individual in the same way. What we're hashing at the moment is images and videos. That's the easiest stuff to talk about, conceptually at least, and as you say, if you change a pixel and you're making a kind of traditional cryptographic hash, you change that hash, and that makes it extremely difficult to track. We do have some ways around that. We use those traditional cryptographic hashes, like MD5 hashes, for a quick and dirty kind of check; they have quite a lot of utility for content that's been reshared. But we also use what we call perceptual hashes, which are hashes that are locality sensitive. So, if you were matching two different images that are very, very similar, they will have hashes that are mathematically close to each other, whereas two very, very visually distinct images will have hashes that are mathematically distant from each other. And we have a number of different algorithms out there that help us with that. Microsoft have an algorithm called PhotoDNA. Facebook also open sourced an algorithm called PDQ, which is the easiest for us to talk about because it's open source and so it's available for kind of more public scrutiny.
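To make the contrast Tom draws here concrete: a cryptographic hash like MD5 changes completely when a single pixel changes, while a perceptual hash stays close. The sketch below is a toy average-hash on synthetic pixel data, purely illustrative, and not how PDQ or PhotoDNA actually work internally:

```python
import hashlib

# Toy 8x8 grayscale "images" as flat lists of 64 pixel values (0-255).
original = [10, 200, 30, 180] * 16
altered = list(original)
altered[0] += 1  # change a single pixel

# Cryptographic hashing: one pixel's change produces a completely
# different digest, so exact-match lookups miss the altered copy.
md5_a = hashlib.md5(bytes(original)).hexdigest()
md5_b = hashlib.md5(bytes(altered)).hexdigest()

def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set when the pixel
    is brighter than the image's mean brightness."""
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(h1, h2):
    """Count of differing bits -- a small distance means 'visually similar'."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

# The MD5 digests differ entirely, but the perceptual hashes are
# identical (Hamming distance 0), so the altered image still matches.
dist = hamming(average_hash(original), average_hash(altered))
```

Production algorithms like PDQ work on a normalized, downsampled luminance grid rather than raw pixels and produce fixed-size hashes that are compared by thresholding the Hamming distance, but the matching principle is the same.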
And those algorithms allow us to do this perceptual hashing, this matching of images that have been slightly altered, so that we can track them when they're being shared either on different platforms, in different formats, or slightly cropped, slightly altered in terms of color gradient, or any kind of slight modification that's happening. So that's the kind of general approach to perceptual hashing. The other kind of advantage of hashing is that there's a level of anonymity that is provided as you hash. By creating a hash, rather than sharing just raw content, firstly, from an engineering point of view, I'm not sending around images and videos, which is highly expensive. But secondly, I'm not sharing user data, I'm sharing an anonymized form. So, unless you have that content, it's very difficult to kind of reverse the hash and get back to the original content itself, which helps from a security point of view and a privacy point of view when it comes to the users of these platforms. The other thing to point out about hash sharing is it really allows member companies to find those indicators or those signals on their own platforms. So, it allows members to find images and videos that are being shared on their systems. What it doesn't say is you have to do this particular thing with this particular image. We're very careful to make sure that each platform has their own independence, their own privacy policies, their own terms of service, their own community guidelines and their own philosophy as a platform, depending on what their use case is. And that means that we can help them surface those bits of content, but ultimately it's the platforms that decide this item is definitely violating our particular terms of service and should be removed.
This item is more in a gray space, and maybe we want to de-rank it in search results or have some context provided around it, or maybe there's some other intervention that is required. And that's up to the platforms to tailor, depending on that specific platform, their specific use cases, their specific needs. - [Landon] So, in addition to hash sharing, I'm kinda curious about the technical approaches to real-time comms during crisis, as well as the expansions into hashing technology. - [Tom] Yeah, absolutely. So, let me take the hashing technology first, we were just talking about that. Really there are two new pieces of technology required to cover those three buckets. And as a reminder, those are terrorist manifestos related to a specific attack, terrorist publications with specific branding or association with a particular terrorist organization, and URLs relating to terrorist content. The manifestos and the terrorist publications are generally shared in the form of PDFs. And up to now, our hashing has been focused on images and videos, and we have some good, robust algorithms to deal with those. So, those PDFs require a different approach to hashing, and as some of your listeners who work in the cybersecurity field will know, there are lots of approaches to hashing PDFs, in the forensics field in particular, where that's used for tracking kind of malicious documents and PDFs that may have malware in the backend. And that's really great for those use cases. Those technologies don't quite transfer into the counter terrorism space because very often that malware is hidden in the kind of backend of the PDF, and the backend is changed but the frontend remains the same, so you wanna track all of that in a forensic setting. In a counter terrorism setting, actually what I want to do is focus on the content, focus on the frontend of it.
I don't really care too much if the backend has been changed, because the thing that I'm trying to protect against is the influence that the content itself has. So, we've worked on validating a number of different text hashing algorithms that are out there to complement our image and video hashing. And we've been exploring an algorithm called TLSH, which we're working on integrating into our systems. And that really focuses on extracting the pages from PDFs, extracting the text from within those pages and then hashing that text. And we think we can get some really low false-positive rates on those hashes and give a really good indicator that people are sharing substantial portions of things like attacker manifestos. On the URLs, it's much more simple technology, and it's back to the MD5 hashes that you described at the beginning around your IP address example. If a URL changes by one digit or by one parameter, it can send you to an entirely different place. So that we're trying to keep simple. I wanna keep it very, very granular and exact matching. Yeah, I'm gonna miss some things, because there are some minor alterations to a URL that may not send you to a different place, but I'm gonna catch the majority of things without introducing a whole bunch of false positives. So we've gone for a much more simple approach on that side. You also asked about real-time communications. So, one of the other things that we do at GIFCT is, we have an incident-response framework, which helps bring together the tech companies and other stakeholders when there's a real world crisis. And if you think about the attack in Christchurch and the live streaming of that as a kind of key exemplar, really what we're interested in helping the response to is when there's a terrorist or violent extremist attack and there's a key online aspect to it. Clearly GIFCT's role is on that online aspect and how we can help communicate around that.
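The exact-match URL hashing Tom describes above can be sketched in a few lines. The URL and the shared hash set here are hypothetical, and GIFCT hasn't published its implementation; this only illustrates the trade-off he names, deliberately missing minor variants in exchange for a very low false-positive rate:

```python
import hashlib

def url_hash(url: str) -> str:
    """Exact-match digest: any single-character change in the URL
    yields a completely different hash."""
    return hashlib.md5(url.encode("utf-8")).hexdigest()

# A hypothetical shared list: hashes are circulated instead of raw URLs.
shared_hashes = {url_hash("https://example.com/page?id=123")}

def is_flagged(url: str) -> bool:
    """Check a URL against the shared hash list by exact digest match."""
    return url_hash(url) in shared_hashes

# An exact resubmission matches; a one-parameter variant is deliberately
# missed -- trading some recall for a very low false-positive rate.
hit = is_flagged("https://example.com/page?id=123")
miss = is_flagged("https://example.com/page?id=124")
```

Sharing digests rather than raw URLs also carries the same privacy property described earlier for images: a member can check its own traffic against the list without the list disclosing the underlying content to anyone who doesn't already have it.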
And so, one of the key things that we need to do is make sure that situational awareness is shared across member companies, making sure that ultimately the signal, in the form of hashes, can be shared quickly and robustly around all of our member companies. And we need to minimize any kind of potential for harm that can be caused with respect to human rights, because when we're in a crisis mode, that's when we need to be responding quickly, and that's also where we need to be most respectful of human rights. There's a huge array of different things happening online, from the terrorists actually sharing content relating to the incident, which is obviously what we want to be trying to mitigate, but also lots of journalists reporting, bystander footage, kind of legitimate public discourse and newsworthiness that we need to be protecting, and actually ensuring it is protected at a greater level for things like potential war crimes investigations or other kinds of issues like that that may come up in the future. So, it's a real balance between making sure that the company responses are very responsive and acutely aware of the situation and able to share those signals, but also making sure that we're mitigating any potential harm that we cause by doing that kind of robust response, in the human rights space. - [Landon] That's absolutely fascinating, I could get in the technical weeds on this stuff all day and I have so many more questions, but looking at everything holistically, I'm kinda curious, how do you account for human rights when conducting your mission? - [Tom] It's an extremely good question, and I think it's best summarized by saying that, when Nick Rasmussen, our CEO, came on, one of the initial commitments that he made was to enhance transparency of how GIFCT operates. And that was really with a view to this human rights perspective.
It was critical for our first year as an independent organization that we really understood what our impact on human rights was. And so we commissioned a human rights impact assessment, which is available on our website, through an organization called BSR. And what they did is really look through all of the interventions that we put in place, whether it's the technical stuff that I work on or the more programmatic research areas that my colleague Erin Saltman works on, to see what the impact on human rights in each of those areas was, and make some recommendations about how we can improve, where we can change organizational structures, where we can implement more robust processes, to make sure that we're balancing human rights in the right way. And we've already been taking action on those recommendations. For example, we've changed our membership criteria to really enhance the crispness with which we describe what a commitment to human rights for our member companies is. And we have also constructed a multi-stakeholder effort, as I described, to look at what our taxonomy is, so that we're addressing some of the biases that exist within that. Over the next 12 months, we've got a whole range of different things that we're looking at to diversify our stakeholders and our membership, which will also help with that, as well as to enhance that transparency work, which we're going through step by step in a very deliberate way to make sure that, a, we're sharing useful information, both from a counter-terrorism perspective but also from a human rights perspective, and, b, we're balancing that transparency with the need to ensure that we're not giving a kind of advantage to our adversaries, because clearly terrorists are also watching what we're all doing, just as adversaries in the cyber world are looking at what cybersecurity researchers are doing. And so we need to make sure that we're balancing those two, in some ways competing, things.
Ultimately though, I like to think of it as optimizing for human rights. You know, why do we do counter terrorism in the first place? Well, to help people protect their right to life, liberty and the pursuit of happiness, right? I mean, that's ultimately what we're doing here, and if we're not enhancing people's security, we're not enhancing people's right to those liberties, then we're failing as an organization. And so, everything that we do is from a human rights perspective, and we need to be making sure that as we're helping enhance those rights on the one side, we're also balancing that, and we're not impinging on things like freedom of speech and privacy on the other. - [Landon] Well, I think there's probably just an element of, at the end of the day, kind of reducing violence, I have to assume as well, right? Everybody has the right to free speech, particularly in Western societies. It's what makes us great, but when you go to, you know, violence as another means, I think that that's probably a line in the sand, so to speak, I have to assume, right? - [Tom] Yeah, absolutely. And that's why GIFCT focuses on violent extremism. Clearly, there's a whole range of different harms out there online, from terrorism, violent extremism, extremism, and then you get into conspiracy theory, disinformation, misinformation, which oftentimes these days blurs, unfortunately, with political speech, and that whole area is extremely messy. But I think there is a space on the far end of that, where there's violence involved and where there's a threat to life involved, where it's much more clear what we can and can't do, and getting that balance and working out where those lines are is really a big challenge and continues to be, especially in today's kind of very complex information environment online. And that's something that we're working on literally every day. - [Landon] That's very well said.
And I guess I'm kinda curious from that perspective, what's next for GIFCT and how would your team like to see the mission evolve? - [Tom] Yeah. I think diversity is the key thing here, and that's from two perspectives really. One is from our membership and the other is from the technical approaches that we're pursuing. In terms of membership, I want to see GIFCT grow in terms of the geography that we support. Just because of how the internet has evolved over the last few decades, we have a big group of US-based companies. The internet is changing. There are lots of other places where there's a huge number of very important tech companies out there. And we really need to diversify in terms of geography to make sure that we're reaching all of those sectors as well, all of those markets as well. I think the other part of that, in terms of membership, is that we've been focused traditionally on social media platforms. That's where the bread and butter is, largely because of the issues that you described around ISIL in the kind of mid-2015, '16 era. But those are clearly not the only online platforms that terrorists use. And I really want to see us looking at a wider array of platforms, organizations that do, you know, financial services, or gaming platforms, or audio-based platforms. Those are companies that we really want to be working with and providing value to. I think on the technical side, I really want to be looking at how I can diversify the value we hand back to members. Hash sharing is great, and if you're a company that focuses on user-generated content, that is a fantastic tool in your arsenal. But as you know, coming from a security point of view, I don't want a security system that is one layer deep, I want multiple layers of security, each doing a partial job.
And so I want to build those other layers of our technical offering into all our work, and be able to make sure that I can service that wide array of different members that we're beginning to build, whether they're companies that provide logistics services or transport services or, you know, whatever it is. I want to be able to make sure that our technical approaches can add value to those companies. So that's where I would love to go next. And on top of that, as I've said, I really want to build out our transparency work. I would love to get to a point where we're having much more in-depth transparency, and actually our transparency work not only is really helping the kind of accountability from a human rights perspective, but also helping us get feedback from various different communities, whether it's communities exposed to radicalization or communities who are victims of violent extremism. I wanna be able to get feedback from all of those communities in a multi-stakeholder way, with governments and academia as well, so that we can do better and make more impact on our adversaries. - [Landon] Tom, love what you guys are doing at GIFCT. Can't thank you enough for joining the program. Thank you for your public service, and appreciate you joining the show. - [Advertiser] For the latest subject matter expertise on Managed Intelligence, please visit us at www.nisos.com. There we feature all the latest content from Nisos experts on solutions ranging from supply chain risk, adversary research and attribution, digital executive protection, merger and acquisition diligence, brand protection and disinformation, as well as cyber-threat intelligence. A special thank you to all Nisos teammates who engage with our clients to solve some of the world's most challenging security problems on the digital plane and conduct high-stakes security investigations. Without the value the team provides day in, day out, this podcast would not be possible. Thank you for listening.