- Welcome to the CYBER5, where security experts and leaders answer five burning questions on one hot topic in the actionable intelligence enterprise. Topics include adversary research and attribution, digital executive protection, supply chain risk, brand reputation and protection, disinformation, and cyber threat intelligence. I'm your host, Landon Winklevoss, co-founder of Nisos, a managed intelligence company. In this episode, I talk with Dr. Mary Aiken, professor of forensic cyber psychology and former producer for CSI: Cyber at CBS Television Studios. We discuss cyber psychology and how human behavior changes online, including the factors that lead to extremist ideology and possible technical solutions enterprises can use to alert on disturbing behavior from employees or critical contractors. Stay with us.

Mary, welcome to the show. We're talking today about cyber psychology and how human behavior changes online. I think a lot of people would probably wonder: what is cyber psychology? Do you mind giving us a brief overview?

- Sure, Landon, and thanks for taking time to chat today. So I'm a cyber psychologist. What is cyber psychology? It's the study of the impact of technology on human behavior, an advanced discipline within applied psychology. My specialist area is forensic cyber psychology; I am a professor of forensic cyber psychology at the University of East London, and forensic cyber psychology is the study of criminal, deviant, and abnormal behavior online. Unfortunately, I'm kept pretty busy.

- I'm just curious, how did you get involved in this?

- Well, I first studied psychology back in the day, and I went to work in industry for many years, applying psychology in industry. I worked for companies like Kellogg's, Pepsi, Frito-Lay, and Miller Beer, in an area called consumer behavioral profiling, which is basically figuring out what consumers want and how to offer a product to them. While I was working in industry, I was a senior vice president of a company called EMAC, and it was the beginning of the internet as we know it now; in the nineties there was talk of social media, and a friend of mine who is a developer came up with an innovation, one of the first commercial chatbots, called Jabberwocky.com. I was fascinated by it, and people can check out Jabberwocky.com; it's a companionship bot. I thought it was captivating. I thought, "Wow, this is incredible." People could talk to this AI, and they could talk to each other online; this could be terrific for people suffering from social isolation or people with specific learning difficulties. And then I stopped and thought, well, maybe not. This was '92, '93, prior to technology as we know it now and prior to social media, but I thought it was compelling and captivating technology. So I went to look at the literature and came across the work of Professor John Suler, who at the time was writing about cyber psychology, and it was just that moment for me: I thought, this is the future. I've seen the future and it's called cyber psychology. So I went rushing back to my professors of psychology to say, look, there's this new world coming, and it's technology and computers and chatbots and social media, and this was moving toward the late nineties, when it was becoming more and more prevalent. And they said, cyber hocus-pocus; they said humans will never communicate like that.
They maintained we're hardwired for face-to-face interaction, but I saw it differently, and I decided to leave industry, go back, and re-qualify. So I did a Master of Science in cyber psychology, then back to back a full PhD in forensic cyber psychology, and followed that up with a couple of post-docs in cyber criminology, in information communication technology, and in network science. So at this point in time, yeah, I'm probably one of the most qualified people in this space, probably because I saw it so early on.

- So let's walk through that, because I think this is a very relevant topic, certainly in today's era of extremism festering online. Before we get into the extremist elements, and certainly how human behavior changes online, it would probably be helpful to provide an overview of cyberspace as it relates to psychology.

- Sure. So as cyber psychologists, we maintain that human behavior can fundamentally mutate or change in a cyber context. You've got key constructs such as the Online Disinhibition Effect (ODE), which dictates that people will do things in a cyber context that they will not do in the real world. You've also got anonymity as a powerful psychological driver online, and it's funny, when you look at the pros and cons of anonymity, people don't like having that discussion. Very often you'll have heated academic debates, and some argue that anonymity online is a fundamental human right. Well, no, it's not. It's an invention of the internet, and it is equivalent to a superhuman power: the power of invisibility. Think about mythical characters and magic cloaks, the power of invisibility. The problem for humans is that the power of invisibility comes with great responsibility. So as we engage with technology, it's not a transactional relationship. What does that mean? When you pick up your phone or when you log on, you are heading into a powerful psychological space, a space that we describe as cyberspace. You get time distortion effects, you get these changes in human behavior. For example, did you ever log on to check your emails for five minutes, knowing that a taxi was coming in 30 minutes and you were going out to dinner, back in the day when we could all go out to dinner? And all of a sudden you find that time distortion: the 30 minutes goes by just like that, as if it were 10 minutes. So you're in this compelling, immersive environment. And in this cyberspace, we see that human behavior is evolving at the speed of technology. Kids are growing up in this space; when you see an infant with a smartphone in their hand, or a toddler with an iPad, or a young kid with their computer, they are in a psychological environment. Now, we know from the work of people like Proshansky in the 1980s, classic environmental psychologists, that there's a whole body of literature and learning about the impact of environment on human behavior. There is very little literature or knowledge about the impact on human behavior of growing up in cyberspace in a developmental context, or about how human behavior is mutating and changing in that environment: for example, in terms of cyber criminal behavior, lone or organized cyber criminals, sophisticated threat actors, and state-sponsored or state-condoned threat actors.
All of these human entities, all of them, their behavior is evolving and changing at the speed of technology. In 2016, NATO ratified cyberspace as an environment. We've been talking as cyber psychologists about cyberspace as an environment for about two decades, but NATO ratified it as an environment, acknowledging that the battles of the future will take place on land, sea, and air, and on computer networks. And this is really important for industry: follow that military example, if you will, and think about cyberspace as an environment, think about how your employees and your business operate in this environment, and also think about threat actors in the environment of cyberspace.

- From a psychology perspective, why do people feel the need to say one thing online when, if you were to question them in person about the same thing they said online, you would get almost radically different answers?

- I think, first of all, online can tend to be a lean medium. Let's talk about a much older piece of technology, one that's been about two million years in development, and that's the human brain. As a species, we're struggling to adapt to our behavior being mediated by technology. Lots of cues are missing, the sort of cues that we would have face to face. Therefore, people may say things in a cyber context that they would not say in the real world. They're not getting the same feedback; they're not seeing that micro-expression, that flinch of pain or disappointment, the nuanced behavior, the body language in the same way. Odor, proximity: our cognitive processes are finely, highly tuned to process information, to process other humans, in real-world contexts. Now, as a species, we've got to learn to adapt. When I see problematic or negative behavior online, I conceptualize it not as addiction, and there's a whole debate around addiction, because you cannot be addicted to air, you cannot be addicted to water; the addiction model implies abstinence, and we can't abstain from technology, we have to learn to live with it. And in doing so, we have to accept that in conceptualizing problematic issues online, we are playing catch-up. The behavioral sciences have been blindsided, to a certain extent, by these rapid evolutions in technology, and arguably some behavioral scientists are poorly placed to advise industry or policymakers on how we can tackle these issues, how we can actually ensure that we have a better cyber society, how we can work towards a collective greater good in this domain.

- So, at Nisos, you're on the strategic advisory board for Paladin, and as it's been publicized, we're very fortunate to have Paladin as our investors, along with Columbia. Certainly part of the alignment that we had with Paladin is that ultimately cyber security, as it's defined, is not just about confidentiality, integrity, and availability, which I think a lot of the bread and butter of cyber threat intelligence really gears itself toward. Cyber intelligence, managed intelligence, really gets into that psychology, really gets into almost those stylometric attributes, if you will, of everything that's happening online. Would you agree or disagree with that? And then ultimately, I guess the question is, what do we need to do in terms of online safety technologies? What are the potential types of solutions?
- Yeah, really good question, because we're at this really exciting point in time where there's this new emerging area, and at Paladin we have an investment thesis around online safety technologies, or what we call safety tech. We're conducting some research at the moment, and that will be published next month, I believe. Effectively, it really comes down to a gap in the knowledge and also a threat gap from a client perspective, in that to date we've had everything that cyber security can do for an organization in terms of the focus on securing data, securing networks, and securing systems. But cyber security in its own right does not focus on what it is to be human. Therefore, at Paladin we started looking at this area very carefully. I also work as an advisor to the UK government, to the Department for Digital, Culture, Media and Sport (DCMS), which also began to investigate the concept of technology solutions to technology-facilitated problematic, and indeed criminal, behavior. We did a research report in the UK last year which investigated this new emerging sector, the online safety technology sector. Also in the UK, there are legislative proposals around the concept of online harm, so for the first time we're joining the dots to create the spectrum of harm. I have been arguing for this for over a decade: the relationships between cyberbullying, harassment, mis- and disinformation, and fake news, the spectrum of online harm. And effectively, online safety technologies, or safety tech as we describe them, address harmful behavior online in terms of human factors. So yes, we want our data, our systems, and our networks to be secure, but we also want the humans who use and operate those systems to be psychologically robust, resilient, and secure. And I believe it's the combination of safety tech and cyber security that ultimately will deliver on protection in cyber contexts. So I think it's very exciting for Nisos, because what I've read about your company and what I know from talking to various people within your organization is that you have this intuitive understanding of threats from a human perspective, whether it's insider threat or whether it's countering dis- or misinformation online. When we talk about these concepts, we're talking about human behavior in cyberspace, and we're talking about abstracts like mis- and disinformation. And I think for a long time it was very difficult for people to understand what this really means. Unfortunately, we've had an example which is really, from my perspective, the first real-world exemplification of what happens when you have fake news, distorted truths, and mis- and disinformation running rampant online and spilling over into real-world harm, and that would be the events at Capitol Hill. We talk about filter bubbles, and we talk about AI and the impact it has on human behavior at the level of the individual, in other words the psychology, and also in a social context, in terms of the group. So if you like to work out and you get caught in a filter bubble, and gym memberships and protein powder and Nike trainers are being proffered to you, nothing is going to happen other than you're probably likely to get fitter.
However, if you get caught in a distorted filter bubble where your beliefs become more fanatical, where the truth becomes more distorted, where you fall victim to mis- and disinformation that socialize and normalize your abnormal beliefs, then that is extremely problematic, not just for the individual but for society. And the problem with human behavior is that it is not like a tap; you cannot simply turn it off. If it takes four or five years to become radicalized, it takes equally as long to de-radicalize.

- Coming from industry as you do, with the background that you have, both as a cyber psychologist and from working in marketing for Fortune 500 companies, you understand the importance of brand. Not only can a company's employees get wrapped up in these types of disinformation circles, but so can their consumers, and all of this can have a brand effect. What would be your guidance, ultimately, to combat this, if you were talking to the CEO and the C-suite of a Fortune 500 company?

- I think the first thing these companies need to focus on, in terms of the integrity of the brand, is what we describe as cyber situational awareness. You have to understand and be knowledgeable about your brand and your reputation on the surface web and the deep web, and you have to protect them in both of those domains. But additionally, you have to be cognizant of your employees in a real-world context and online. At the moment, many corporations, most of the blue chips, will deploy psychometric testing, because they want to know who their employees are. But in terms of reliability and validity in a scientific context, are we measuring what we purport to measure? If human behavior changes online, then you need to deploy what we describe as cyber psychometric testing. You need to know who your employee is in the real world, and you also need to know who they are online. This is fundamental in terms of specific threats such as insider threat, during the pandemic and, hopefully soon, post-pandemic, as we move toward this new world where we'll have an increase in remote working, because we've had an extensive trial period to show that it works, and where we'll have the development of the hybrid office. It's going to become increasingly important to know who your employees are in this technology-mediated environment, and increasingly important for the C-suite to develop cyber leadership skills: leading in the real world and leading remotely. And can I just wrap up with something, Landon? We're all in the pandemic at the moment; it's terrible, it's tragic, and it has an enormous cost in terms of human life and stress. Just to focus on one piece of good news as a result of the pandemic, I want to talk a little bit about psychological resilience and hardiness, two really important constructs that are part nature, part nurture. You're born with a predisposition, a certain amount of psychological resilience. The good news is that the only way you can enhance, develop, or practice that resilience is through adversity. And one positive outcome of this period that we've been through for almost a year now is that most of us will have developed and enhanced our psychological resilience skills, and we will benefit from that. Just some good news to finish up with.

- Mary, you are officially the most interesting person in the world, in my opinion. We appreciate your time today.
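Editor's note: as a rough illustration of the cyber situational awareness tooling discussed above, here is a minimal sketch, in Python, of flagging items where a tracked brand term co-occurs with harm-related language. Everything in it is a hypothetical assumption made for illustration: the brand name ExampleCorp, the term lists, and the data sources are not drawn from Nisos, Paladin, or any real product, and any monitoring of employees or deep-web sources raises legal, ethical, and privacy considerations well beyond a toy keyword match.

```python
# Hypothetical sketch of a keyword-based brand/reputation monitor.
# Term lists, sources, and names are illustrative assumptions only;
# a real system would use vetted data feeds and far more robust NLP.

from dataclasses import dataclass
from typing import Iterable, List

BRAND_TERMS = {"examplecorp", "example corp"}                 # hypothetical brand names to track
HARM_TERMS = {"boycott", "scam", "leak", "fraud", "threat"}   # hypothetical risk indicators


@dataclass
class Mention:
    source: str   # e.g. "forum", "social", "paste site"
    text: str


@dataclass
class Alert:
    source: str
    text: str
    matched_terms: List[str]


def scan(mentions: Iterable[Mention]) -> List[Alert]:
    """Flag items where a tracked brand term co-occurs with a harm-related term."""
    alerts = []
    for m in mentions:
        lowered = m.text.lower()
        if any(brand in lowered for brand in BRAND_TERMS):
            hits = sorted(term for term in HARM_TERMS if term in lowered)
            if hits:
                alerts.append(Alert(source=m.source, text=m.text, matched_terms=hits))
    return alerts


if __name__ == "__main__":
    sample = [
        Mention("forum", "ExampleCorp support was great this week."),
        Mention("paste site", "Planning a boycott of ExampleCorp after the data leak."),
    ]
    for alert in scan(sample):
        print(f"[{alert.source}] matched {alert.matched_terms}: {alert.text}")
```

In practice, simple matching like this would only be a first-pass filter feeding human analysts; the point is that "knowing your brand and your people online" becomes an ongoing engineering task of collecting, scoring, and triaging signals rather than a one-off assessment.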
- For the latest subject matter expertise around managed intelligence, please visit us at www.nisos.com. There we feature all the latest content from Nisos experts on solutions ranging from supply chain risk, adversary research and attribution, digital executive protection, merger and acquisition diligence, brand protection, and disinformation, as well as cyber threat intelligence. A special thank you to all Nisos teammates who engage with our clients to solve some of the world's most challenging security problems on the digital plane and conduct high-stakes security investigations. Without the value the team provides day in and day out, this podcast would not be possible. Thank you for listening.