- Welcome to the Cyber5, where security experts and leaders answer five burning questions on one hot topic in actionable intelligence. Topics include adversary research and attribution, digital executive protection, supply chain risks, brand reputation and protection, disinformation, and cyber threat intelligence. I'm your host Landon Winkelvoss, co-founder of NISOS, the managed intelligence company. In this episode, I talk with Trust and Safety Professional Association Executive Director Charlotte Willner. We navigate the world of Trust and Safety with a focus on moderating the abuse of user-generated content and fraud in technology companies that produce applications, platforms, and marketplaces. We also talk about the nuances of Trust and Safety within the larger security apparatus of enterprises and how security professionals can explore a career in Trust and Safety. Stay with us.
Charlotte, welcome to the show. Would you mind providing a little bit about your background for our listeners, please?
- Thank you for having me on. My name is Charlotte Willner. I'm the executive director of the Trust and Safety Professional Association. The association is a place where we are building a shared community of practice across all of the people who develop and enforce principles and policies about what is acceptable online. So a Trust and Safety professional can be a content moderator. It can be a policy writer. It can be a tools engineer. There are a lot of different ways you can participate in Trust and Safety as a professional, and we are a space for all of that. What we do is create events and spaces for people to meet and connect with their peers across our industry globally, we facilitate training, we do knowledge sharing between those professionals, and we work to provide career development services and resources tailored to a lot of the very unique challenges that this profession presents to people.
- This is fantastic, and I've been wanting to have a conversation like this for a long time, 'cause it's not every day you can talk to somebody about navigating the adaptive waters of Trust and Safety with the technology boom that's happened over the past 20 years. Certainly there are different threats than you might see in a traditional security operations team, ones with which you all are well familiar. So I'm looking forward to just diving right in. I guess the first question is just that: what is the overall theme of what Trust and Safety means for an enterprise?
- I would say that what we call Trust and Safety today emerged from two very different work centers within technology companies: security and, actually, customer support. When you first approached me about being on a fraud-oriented podcast, I was like, oh no, what if I don't know enough about fraud? And you know, I'm the executive director of TSPA; I shouldn't have to be a fraud expert. We have fraud teams who are members, right? But the reality is that in many companies, particularly in the early days, you found two very separate swim lanes across what we would now consider the Trust and Safety landscape. There were the folks who dealt with the money and the folks who dealt with the end users. The money people were typically out of security teams. That's where the fraud teams often lived as they appeared on the landscape. That's who usually ended up talking to the FBI, all that stuff.
The user people often came from customer support divisions, who increasingly were dealing with weird stuff, right? Maybe it's very angry people, maybe it's stalkers, maybe it's people with their bits out when they shouldn't be out. Not typically what we would consider when we look at the fraud landscape. But as those two swim lanes grew and extended, we noticed that there was a lot of overlap between these fields and a lot of interchange between them. For example, on the user support side, you might get to a point where you need to involve law enforcement, right? Or you need to report something to your physical security division because it's a threat to the company itself, something like that. Similarly, there are a lot of folks who work in fraud who will notice something weird going on, and it's not about a credit card. It's not technically their department, but it's something that maybe someone ought to do something about. Maybe it's drugs, maybe it's sexual exploitation, whatever it is. And that tended to fall onto more of the abuse side. So what we've seen really in the last 15 to 20 years is, I don't want to say convergence, because they're not the same, but we're starting to see a lot of increased collaboration across what we would now term, very broadly, the Trust and Safety field.
- That's certainly helpful. So when you think about platforms and marketplaces, which is pretty traditional territory for a lot of Trust and Safety teams within broader technology companies, let's start with the money side. Fraud has been around across numerous businesses and industries for years. Is it reasonable to define fraud as attacks against the platform, and then abuse as using the platform to conduct attacks? That's how our customers outside of technology would probably generally define it. Are those your thoughts, or is there more nuance than that?
- Yeah, that's a starting point, absolutely. But I do think there is nuance there. There are of course other attacks against platforms that we wouldn't consider fraud, like DDoS, and mass reporting campaigns and brigading could perhaps rise to the level of an attack against a platform. And when you're thinking about using the platform to conduct attacks, bad actors can use the platform to defraud at the individual level. This is actually what a lot of Trust and Safety teams deal with, particularly in what we refer to as meatspace (we really need to get a better word for that), meaning online marketplaces that deal with real-world spaces and objects. So for example, something like Airbnb. Fraud could happen on that platform where an individual bad actor misrepresents listings, tries to take the transaction off platform, things like that. That's all what they would consider fraud, even though it is not a large-scale attack against the platform. Or think about something like the 419 scam. I think we would all broadly interpret that as fraud, but it's not an attack against the email platforms; it's an attack against individuals using email. And so I think there is some nuance to that definition that is worthwhile unpacking as you're talking about fraud versus abuse.
- That's certainly helpful. And then I guess on the end-user side of that, what are the Trust and Safety challenges between fraud, like you just described, and user-generated content? Where does that come into play?
- This is a good question because it's about the nature of truth. User-generated content can often be employed in a fraud, knowingly or unknowingly on the part of the person who generated that content. You see that in a pretty common scam that goes around on Facebook in particular, where people steal pictures of military service members and then say, oh, I'm so-and-so, and I have such-and-such a sad story, why don't you connect with me online? Oops, PS, I need money. That's a fraud that's propped up by user-generated content. And to this question of disinformation, I think it's helpful to look at information as a spectrum, and disinformation and misinformation sit in different places on it. Misinformation, typically, we would say is bad information that's out there, but there's not malicious intent behind it. Disinformation, we would say, is bad information that someone is intentionally putting out into the ecosystem to affect a particular outcome. And the challenge we have with user-generated content is that so much of it is misinformation. And it's not at the level of vaccines work or they don't work. It can be as simple as, hey, this is a great recipe, and it turns out it's a terrible recipe. But it travels around the internet as, oh, the best apple cake, and everyone wastes their time making this awful cake. That's misinformation. It's not harmful; maybe you waste money on supplies and you waste your time and your kids hate it, okay. But there's all kinds of that type of information that's put out where maybe there's no agreement on whether this is a good apple cake or not. Maybe it's being reproduced because someone thinks they're being helpful, or is trying to be helpful, and it turns out they're not, but there's not that intent behind it. And you can apply that to a lot of other circumstances beyond the apple cake where it's more menacing, right? That can be dangerous cleaning tips, that can be essential oils will cure your Ebola, that can be political opinions and medical opinions that end up having quite a negative impact. But because often your teams will never know the intention behind a statement or a piece of content, it's really difficult to figure out what is the appropriate punishment, right? What is the appropriate response? How do you handle the fact that this exists on your platform, and how do you prevent its production? Those end up being really challenging questions, I think, in both the fraud and the user-generated content space.
- That's a perfect segue into our next question. I think Trust and Safety teams are nuanced because they're usually in technology companies and there are different chains of command outside the security operations team. What are some overlaps between how traditional enterprises, so let's take retail, financial, manufacturing, tackle problems and how Trust and Safety playbooks work? What are some playbooks that a bank could take away from, frankly, somewhere like Airbnb, like you said, in terms of how they dissect these problems and ultimately add value?
- It's a really interesting question, because I think Trust and Safety is fundamentally risk mitigation and incident response. It's a lot of the same principles; it's just that it's sort of compressed and amplified.
It's thousands of incidents a day, and the scale is always gonna be larger for Trust and Safety, especially on the abuse side, even compared to something like the fraud side, just because the barrier to entry is zero and free services are free, right? Trust and Safety teams are necessarily going to have to handle a huge number of fairly unique incidents every day. And so I think certainly there's a lot that slower-paced enterprises can learn from Trust and Safety playbooks. A lot of the principles are really similar. You're evaluating the quality of your inputs. How do you know what you know? Is the source of that knowledge sound, or are there caveats, and how do you deal with those caveats? What are your business principles? And for tech companies in particular, this goes back to what we would call the company values. What are the things you say as a corporate entity are important to you, and how are you living that out in the way that you enforce your rules? Are you sure that your rules are actually in line with your values? You have to be constantly checking for that and making sure you're really confident in it, because that's where you can otherwise start to see a lot of disjointedness grow. Historically in Trust and Safety we have tried to learn a lot from the other side as well, from traditional enterprises, because certainly when I got into the field I had worked retail and nothing else. And so a lot of the lessons I was bringing to grappling with online abuse and the online landscape came back to what I had experienced and witnessed as a frontline retail worker: how people like to be treated, what will make customers mad, how you deal with theft, how you manage that in a way that helps people feel safe but also deters theft going forward. Using a racial justice lens, I think, is really important, and it's something tech companies have really had a chance to lead the way on, not always successfully, I would add. But a lot of tech companies, especially in the last few years, have really turned their attention to how the practices we have as a Trust and Safety team impact underrepresented groups within our user population. Is there a disparate impact? How do we ensure that in setting our rules and doing our moderation we're not perpetuating existing racial disparities that come to us from things like the justice system and from societal misconceptions and stereotypes? That's a big part of what we do, and that's something we've had to do because of the nature of our work. But I think it would certainly be something that people paying attention from these other sectors could learn a lot from.
- That's absolutely fascinating. If you talk to any cybersecurity expert, they can tell you how many attacks they stopped, and I think the same probably goes for physical security folks, and maybe even Trust and Safety folks. I'm kind of curious how you've put metrics in place that show success in the past.
- Yeah, to me, prevalence is really the gold standard metric. Being able to look at something and say, okay, how much of this do we actually have out there? Let's measure it. Let's go find it and see how much we have. That really helps you know what you're working with. It helps you track the impact of the discrete efforts a team engages in. You can often zoom in and say, oh, this is where we tried this, and it really worked.
This is where we tried that and it didn't, and what did we learn from that? There are a few companies that actually publish this sort of data. Facebook, I think, measures this and puts it in some of their transparency reporting, and YouTube, I believe, now publishes violative view counts, so you can see how many times something was viewed before it was taken down because it violated the terms of service. Those are great metrics to have. They tend to be phenomenally expensive, though, so this type of measurement is often fairly off limits to small and medium-sized companies. You also need a certain percentage of abuse for it to even work. If your abuse rate is literally one in a million, that's a lot harder to sift for; you have to look at a lot of casseroles before you find the swastika. So I always recommend prevalence, but I recommend it with a caveat, which is: just because you may not, scale-wise, be able to do a statistically significant prevalence metric, that doesn't mean you can't approximate one. You absolutely can approximate prevalence measurements in ways that are useful even if they're not scientific. An example of this is from my early days with one of my employers. It was very small then, and for a variety of reasons we weren't going to be able to do prevalence. So the proxy we set up: there were a few types of abuse that I in particular had a lot of expertise on, and the measurement was how fast could Charlotte find this content if just set loose on the site in search. That was the metric we were able to use. Not for, like, the weekly check-in or sending to the board, but it was useful, because when I could find something in five seconds, and then we worked on it and it turned out I needed a minute and a half to find it, that's a piece of data you can use. It's not the only one you can use, and it's not one you should send off and get your Nobel Prize on. But it is a way you can measure. And I think, especially for smaller outfits who care about this stuff but have those size limitations or resource limitations, that doesn't mean don't try, right? It's okay to get a little creative with how you look at your success, because what matters is that you're measuring the same thing over time, you're putting appropriate caveats on it, and you're paying attention to it in a way that is measured at all.
- How do you think about a career in Trust and Safety? I don't think there's a college degree in Trust and Safety like there is across different elements of security. If you're wanting to dabble in the space, how do you go about that? And what did you look for when you were recruiting?
- This is a question whose answer has changed a lot over the last few years. When I was recruiting, certainly as a hiring manager, what I was looking for in people coming into Trust and Safety was a sense of curiosity, a sense of wanting to do the right thing, and heaps of adaptability: really being able to be flexible and understand that every day is going to be different, we're not gonna have all the answers, and you're gonna have to roll with that. I think people with that combination of attributes end up doing really well in this field. As you said, there is not a college degree for this. There's no step-by-step playbook for joining Trust and Safety as a profession. Now with TSPA, we're gonna be putting out a Trust and Safety 101 curriculum.
And our first unit is actually launching this month, which is coming to a close pretty quickly here. We're trying to help people get that first step in, but honestly, I don't know how successfully someone could even put together a college degree on Trust and Safety, because the field changes so quickly. That's something everyone looking at it needs to understand coming in about their career path: there is not a set path. That's been true because it wasn't really a field before and now it is, but even as the field continues to grow, your career path will likely not look like anybody else's, because the field is so dynamic. So folks often start at the entry level as content moderators, or as the Jack or Jill of all trades at a startup who happens to do a little Trust and Safety. People come in through customer support, or maybe through fraud because they landed on the fraud operations team. Maybe they got a degree in something mathematical and that's what their hiring team decided, and once you're on the fraud team you realize, oh, you know what, I'm really interested in terrorism. Which, of course, there's a fraud angle there too; you can stay on the fraud team. But I think that once you're in at that level, you have a lot of choices about how you move around, but it still very much has to be driven by you and your interests, because there is not an established, oh yes, you do two years in this role and then you're qualified for that role. And the best advice I can really give people who are looking to enter the field, or are just entering, is just get in there, learn everything you can from the position you're in, and work to understand what parts of it are really motivating to you, really interesting to you, because it's gonna be that interest that continues to pull you through your career in Trust and Safety. That may mean, hooray, you're the head of Trust and Safety somewhere. That may mean you're a really specialized investigator. That may mean you spin off into tool development. There are all kinds of things and needs that the space has that are really waiting to be filled, but no one's gonna tell you, this is where you go, go do it. Our goal is to actually be doing an event a week, which is, like, not where we are yet. I know, I say that and people are like, whoa! But what that really is, is, okay, we're talking like four events a month, and that's not big scale. Some of those are going to be like, hey, everybody who works on harassment, get together and have yourself a drink, here you go. It's about bringing the community together, which of course at this moment is a little difficult to do mid-pandemic, but we can do some of that virtually, and we're really looking forward to doing a lot more of it physically too. Because it's about more than just, oh, I sat down and got a lecture about Trust and Safety. It's about building those bonds of trust within the community and bringing more and more people into that circle, to understand who's here, who can I learn from, what can I contribute to the space, and how do we keep passing that on.
- No, I absolutely love that and love what you're doing. Congratulations on the position. You have a great mission, and I appreciate your time. For the latest subject matter expertise around managed intelligence, please visit us at www.nisos.com.
There we feature all the latest content from NISOS experts on solutions ranging from supply chain risk, adversary research and attribution, digital executive protection, merger and acquisition diligence, brand protection, and disinformation, as well as cyber threat intelligence. A special thank you to all NISOS teammates who engage with our clients to solve some of the world's most challenging security problems on the digital plane and conduct high-stakes security investigations. Without the value the team provides day in, day out, this podcast would not be possible. Thank you for listening.