The Security Champions Podcast

Spandana Sarala Gorantla - Scaling Security: How AI and Collaboration Transform Threat Modeling

Dustin Lehr Season 4 Episode 4


Duration: 1:02:54

Spandana Sarala Gorantla is a Senior Product Security Engineer at Adobe, specializing in product security, threat modeling, and secure development practices. She is passionate about making threat modeling collaborative, practical, and scalable, especially as AI and agentic systems reshape how teams build software.

Spandana joined The Security Champions Podcast to discuss why threat modeling matters more than ever in the age of AI. In this episode, she shares how threat modeling became a central part of her security career, why collaboration across engineering, product, business, and security teams is essential, and how AI can help scale early risk identification without replacing human judgment. The conversation explores practical approaches to threat modeling, the role of Security Champions, and why frameworks like STRIDE and MAESTRO can help teams ask better questions about modern systems.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Podcast sponsored by Security Journey, Secure Coding Training for Developers and Everyone in the SDLC. Learn more at securityjourney.com

FOLLOW US to stay up-to-date with new content!

Get your free VIBE Coding Field Guide: https://hubs.ly/Q043-zdS0

SPEAKER_02

The Security Champions Podcast is brought to you by Security Journey. We help enterprises reduce vulnerabilities through application security education for developers and everyone in the SDLC. Learn more at securityjourney.com.

SPEAKER_01

Hey everyone, welcome to another episode of the Security Champions Podcast. I'm your host, Dustin Lehr, and I'm here with Spandana Gorantla today. How are you doing, Spandana?

SPEAKER_03

Good, very good. How about you?

SPEAKER_01

I'm doing great. I'm excited to get into it. We have an amazing topic today, threat modeling, probably no surprise. But before we go into that, I'd like to have the audience get to know you a little bit more. Is there anything you want to share about yourself, your background? How did you get into cybersecurity? Why would you do such a thing to yourself? Whatever you want to share.

SPEAKER_03

Oh, yeah, absolutely. First of all, thank you for having me on, Dustin. I'm super glad to be here. Really appreciate it. For those of you who don't know me, this is Spandana Gorantla. I am currently working as a senior product security engineer at Adobe. How did I get into cybersecurity? I think we have to go all the way back to 2016. I was in my final year of my bachelor's degree, and I was actually in hardware, not from a software background, so I was doing electronics and communications and all that stuff. But I really wanted to get into the software side of things, because computer science has always been interesting to me. Deloitte was actually hiring for a couple of roles. One was on the data science side of things, and the other was on the cybersecurity side. Data science, just by the name of it, I was like, okay, this looks like it's all about data. But cybersecurity, I had no idea what that meant, so I was very intimidated just by the name. So I applied for data science, and as part of the interviews, during a group discussion round, the topic was the software development lifecycle. Somehow, unknowingly, I talked a lot about testing, identifying bugs and issues, what happens after production, and all that stuff. So the moderator was like, Spandana, I think you are a better fit for cybersecurity than the data side of things. And I was like, I'm just looking for a job, so if that's what you think, let's try it out. Then I had a few interviews, some technical questions here and there, a little bit of coding, and finally I was in. So that's actually how I got introduced to cybersecurity. It wasn't necessarily a technical role, though. It was more on the auditing side of things, but at least that was my entry point into the whole cybersecurity world. So I really wanted to get more technical.
That's when I decided I was just going to do a higher degree, my master's, and I ended up in the US in 2018 for that. I graduated from the University of Maryland, College Park with a cybersecurity degree, and since then I've never looked back.

SPEAKER_01

Wow, fantastic. It is always interesting to see people's paths, right? It's sometimes windy; it sometimes takes these tangents off course, right? And then we find ourselves wherever we are. So it sounds like that happened to you as well. In terms of the data science side, at least you were somewhat intrigued by that. I'm kind of curious, do you work that into your job nowadays? How does that overlap with cybersecurity, if at all, in your mind?

SPEAKER_03

That's a good question. I usually get overwhelmed when I look at a lot of data, but these days I'm getting better, because my husband is actually a data engineer. He's been giving me a lot of motivation to look at data in interesting ways. I'm definitely doing some data analysis at my work, to see if I can find more systemic patterns in these applications when it comes to security issues, or whether there are repeated vulnerability patterns that we can solve at scale. So I'm definitely paying more attention to the data and the analysis compared to earlier in my career. At that point, I obviously didn't have any of this perspective, right? Data science sounds like you're just going to have to deal with data; at least that part I understood. But cybersecurity seemed like, oh my god, this is a super technical field, and I'm coming from a hardware background. So I was probably intimidated and overwhelmed just by the term. So yeah, it makes sense.

SPEAKER_01

I mean, for me, I'm a data nerd. I haven't been able to escape dealing with data, as you put it, in my career. And I have found it to be very useful in cybersecurity, right? To help leaders and others across the organization understand the impact of cybersecurity, more or less. Not to mention it bolsters our ability to find where things might be going wrong, right? Where there's some sort of spike or something wrong in the data that we need to look into more. Well, fantastic. So, like I mentioned before, the topic here is going to be centered around threat modeling. And that brings me to remembering where we met, which was last November at ThreatModCon. You were on a panel there, and I thought you did a fantastic job just explaining what threat modeling is, what it is to you, and what we need to do about threat modeling in the future, especially in the AI world that we're in today. So why don't we start with some of your initial thoughts about threat modeling? What is it? Why is it important? How does it apply today?

SPEAKER_03

Yeah, absolutely. In my own words, threat modeling is just one of those ways for us to understand these systems deeply enough that we can predict where things can go wrong. And honestly, in my opinion, you don't even need to have that much of a technical background or knowledge to start with threat modeling; pretty much anyone can start. Threat modeling is pretty much all about collaboration and asking the right questions. It doesn't really matter what type of framework you're using, what type of threats we're identifying, or how we mitigate them. The start of all of this is basically a group of folks getting into a room and talking about the system that is in scope, how and where things could go wrong, and what we do to actually prevent that from happening. Traditionally, threat modeling has always been there in the software development lifecycle, though the maturity levels varied. But with AI coming into the picture, and with agentic systems now, in my opinion, it has become even more important than ever. Because with the pace that companies want to move, now that we have AI at hand, which is a great tool, you want to launch more products, you want to launch more features, you want to keep your customers happy. But at the same time, what about security? In a lot of situations, security has always taken a back seat. But I think it's high time that we bring it forward, because there's so much going on, and customers are looking not just at the product or what you're offering, but also at the trust that they have in your products. So yeah, long story short, I think threat modeling is something that we all have to do. In fact, we are all doing it anyway in our day-to-day lives. We just don't notice it.
But it's definitely more important than ever.
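[Editor's note: the "asking the right questions" idea Spandana describes, applied per component, is how frameworks like STRIDE (mentioned in the episode summary) are often used in practice. A minimal illustrative sketch follows; the question wording and function names are our own, not a tool discussed in the episode.]

```python
# Illustrative sketch: framework-agnostic threat modeling starts with
# structured questions asked per in-scope component. STRIDE is one such
# lens. All names and question phrasings here are hypothetical.

STRIDE_QUESTIONS = {
    "Spoofing": "Can anyone pretend to be another user or service here?",
    "Tampering": "Can data be modified in transit or at rest?",
    "Repudiation": "Could an action be denied later? Is it logged?",
    "Information disclosure": "Could sensitive data leak from this component?",
    "Denial of service": "What happens if this component is overwhelmed?",
    "Elevation of privilege": "Can a user gain rights they shouldn't have?",
}

def discussion_prompts(components):
    """Pair every in-scope component with each STRIDE question."""
    return [
        f"{component}: {question}"
        for component in components
        for question in STRIDE_QUESTIONS.values()
    ]

# Two components x six categories = twelve conversation starters.
prompts = discussion_prompts(["login service", "payment API"])
print(len(prompts))  # 12
```

Even non-technical participants can answer some of these prompts, which is the collaborative point being made above.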

SPEAKER_01

Yeah, I love that perspective. We are all doing it, and one of the examples I like to use is to think about your house, right? Why do you lock your door at night before you go to bed? You're actively thinking about threats and threat modeling when it comes to minor day-to-day things as well, driving your car and so forth. So I really like that mentality. Maybe you can help us connect the dots here in terms of your career. So you got into cybersecurity, sort of took that turn. How did threat modeling become part of your focus and, it sounds like, your passion as well?

SPEAKER_03

Yeah, definitely. That's another interesting story as well. I got introduced to the term threat modeling in my master's, in 2018. But I did not really pay much attention to it, because I don't think things really clicked for me back then, especially when it came to threat modeling. So, like a lot of people, I started my career as a pen tester, and I was pen testing a lot of applications and things like that. But one fine day, like you've already heard about last year, Dustin, I had to lead a few threat modeling discussions because the seniors weren't available; they were on PTO. This was the first time I ever got introduced to threat modeling in a practical way. I was obviously a little overwhelmed and intimidated when I first got into these conversations, because I had no idea what threat modeling was or how to even approach it. And there were all these senior software development engineers in the room. How do I not look stupid? That was the goal, right? But then once I started having the conversation around, okay, what does the system do in the first place? How is the data flowing? What components do we have? What is the scope? They started pitching in ideas as well. They were like, oh, we believe this particular component is not built the right way because it has some legacy technology, so we really want to focus the threat modeling on this piece. That made me more curious. Okay, I asked all these really good questions and I have some responses, some data points. Now, where do I go from here? So that's when I actually started learning, and it made me more and more curious as I went. I think what really pulled me into the threat modeling side of things is just the collaboration part of it.
I think it's really cool for a group of folks to just sit in a room and talk about how systems can fail or where they could fail, right? And in my experience, I've seen a lot of discussions where it wasn't just technical people who were part of it. There were sales, marketing, these non-technical folks who also contributed to these discussions. Yes, there could be a point where they might not be able to share what can be done to mitigate a certain threat, because again, you need that technical knowledge. But at the same time, the perspectives that they provided from the customer side, and the business context that they provided, helped us with the prioritization strategy, especially when it came to the mitigation-related work. So, all that to say, collaboration is one thing that I really, really enjoy in the threat modeling arena. I always look forward to having these discussions and bringing as many folks as I can into the picture. But yeah, I think it was four, five, six years ago that I first got introduced to threat modeling in the industry. And I've never looked back since then. I'm thoroughly enjoying it.

SPEAKER_01

That's fantastic. I think being inquisitive was a really good way to get yourself into the space, right? You're just asking good questions of the people who know the details of the system. Right. And I like what you're saying about collaboration. I definitely want to go into that more. One comment that you made is that it's interesting, it's almost a fun thing to do, right? In my opinion. I'm a nerd as well, though, just like you, about threat modeling. And sitting in a room and talking about how systems can fail is the way that you expressed that. Do you also find that there's a lot of discussion around how the system even works to begin with? Even between, like you mentioned, the product folks, the business folks, business analysts, other folks that may not be technical? In my experience, I see that happen a lot. An engineer will say, it works like this, and the product person's like, it's not supposed to work like that. You know what I mean? Or the engineers even disagree: it works like this; it's not supposed to work like that; that's not how we designed it, right? So I'm just curious if you've seen situations like that, where there's some disagreement, where maybe people are learning about the system itself before even talking about the threats.

SPEAKER_03

Yeah, I think that's pretty much a part of every typical threat modeling conversation that we have. There are definitely a lot of disagreements going on, people learning on the fly. Like you mentioned, a few folks say, oh, that's not the intended functionality, but then the person who actually developed it is like, oh, that's how I implemented it. So I think there are two pieces to it. One is, how does the security posture change based on how it's implemented versus what it's supposed to do? If there is a huge difference between the intended functionality and how it's implemented, then I'm sure there are going to be a lot of security repercussions, because that's not the intended business functionality, right? But at the same time, there's another piece to it, where you learn the system much better as a security professional, because now you're able to understand what all the other things are that could go wrong with that system, because there's more business context at this point. It could be as simple as: yes, I intended this with the idea that it has to go through a third-party payment provider and the payment has to be done by the user, right? And then let's say there's another person who said, yes, it's supposed to go through a third-party payment provider, but the completion of the payment is something that we do on our side rather than them. So that's where things get interesting. Because, okay, what if I actually trick the system into completing the transaction on both sides? Does that mean multiple transactions for the user?
So I think those kinds of edge cases really stem from these disagreements that people are having, which is a really good data point for us as security professionals, but it's also a really good opportunity for everyone to come together and align on what we want to deliver to customers. Whatever we developed, is that something that we really want? Is that something that customers really want or are asking for, right? So I think that's a really good point that you brought up. Threat modeling is not just for security professionals, point proved again. It's for everyone to align on the overall product strategy, the features, and the implementation before we roll out to customers. And I'll just add another point and close here. I think that's why it's even more important that we actually do threat modeling as early as possible in the SDLC. Because that's where all of these disagreements and the questions that come up are much less expensive to address, versus when everything's implemented and ready to go to the customers, and now you come to a threat modeling discussion and you have all these disagreements. Then how do we deal with that? So that's another interesting scenario to think of. So yeah, that was a good question.
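[Editor's note: the payment edge case above, where both our side and the third-party provider could mark the same payment complete, is essentially a missing idempotency check. A minimal sketch of the mitigation, with hypothetical names and an in-memory store standing in for a real database:]

```python
# Hypothetical sketch: settle each transaction exactly once, no matter
# which side (our completion path or the provider's callback) reports
# first. A real system would use a durable store with atomic updates.

completed = set()  # transaction IDs already settled

def complete_transaction(txn_id, source):
    """Return True if this call settled the transaction, False if it
    was already settled by an earlier call from either side."""
    if txn_id in completed:
        # Duplicate completion attempt is ignored instead of charging twice.
        return False
    completed.add(txn_id)
    return True

first = complete_transaction("txn-42", source="our-side")
second = complete_transaction("txn-42", source="provider-callback")
print(first, second)  # True False
```

The disagreement in the room ("who completes the payment?") is exactly what surfaces the need for a check like this before any code ships.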

SPEAKER_01

Oh, absolutely. I have more good questions too. So hang on.

SPEAKER_00

Looking forward to it.

SPEAKER_01

Yeah, no, I agree. I think this should happen as early as possible, just to your last point. I think some of the challenge is, how do you make a case for this? If we get into some details there, it would come down to how we end up scaling threat modeling across an organization. I would like to get there; I have it in my notes here, and we will come back to it. But I do want to double-click on the collaboration piece, especially what we were just saying around the value of threat modeling for not just security, right? It's also valuable for the engineers. Do you think they feel that way? What I've run into sometimes is, you're taking an engineer away from doing the actual work, right? They want to do heads-down work, they want to be slinging code all day, and you're asking them to join a discussion instead. How do people respond to that? Do you find that people are excited, accepting of that as an activity? And how do you find they come out of it? Is it an annoyance, like, I'm glad that's over? Or is it, you know what, I'm really glad that we did that? What's your experience?

SPEAKER_03

Yeah, definitely. I think there are again multiple sides to this. It also depends on the culture that we are building in these companies. If we treat threat modeling as another checklist item, which is not how it's supposed to be, for sure, when you start treating it as another release-readiness checklist item, it's going to feel a little more burdensome, and it's not really considered part of a free-flowing discussion with security folks so that we can all make the product better. It's seen more from the perspective of: oh, I need to get this done with, or else I'm not going to release this. But my career is actually dependent on this release, because it's a major release and I've spent a lot of time, and there are performance reviews and all of that happening in the background, right? So there's that internal pressure that a lot of people have, especially when threat modeling is looked upon as another checklist item or a compliance exercise. But I'll also say this: a lot of times we as security professionals have these conversations in a way that says, you did not implement this right, so now we'll have to go back and redo everything. That's not how the discussions are supposed to be. The whole point of threat modeling is that we as a company, or as a group, or as a team identify where things could go wrong so that we can plan better. Yes, we definitely don't want to pull development folks out of their focus zones or their coding exercises, right? But they are our champions to actually incorporate security into the design or the implementation. And we would love to do that as early as possible so that we don't cause all these disruptions to their flow. That being said, at least in my experience, I'm seeing a little shift towards people wanting to have these discussions.
As compared to previously: I don't want to have it, but I still need to do it, because it's more like a compliance exercise, or else the audit folks are going to be on me. The reason I'm seeing that shift is because AI has been overwhelming for a lot of people, at the pace at which things are evolving. When MCP first came into the picture, not a lot of people were even aware of what MCP stands for. But at the same time, leadership really wanted people to start using MCP, spinning up these MCP servers to make things easier and better for everyone. That's where I actually started seeing people proactively reach out to security to talk about: okay, how do we even work through this thing? What are some of the things that I need to keep in mind when developing this? Because tomorrow I develop this and there's a huge security issue that blows up, and everyone's going to be on me as to why we didn't talk to security about this, right? Which I want to avoid. But at the same time, I want to make sure that we're doing this as seamlessly as possible. So I'm definitely seeing that shift, especially with AI in place. And I'll quickly add this: the fact that everyone can use AI, regardless of being technical or non-technical, means it doesn't really matter whether you understand what threat modeling is or is not. You can just go to AI and say, hey, I'm implementing this MCP server, what are some of the things that could go wrong? Because it has become that easy and accessible. Previously, if you needed to talk to a security professional, you probably had to get into a queue of a thousand reviews or a thousand tickets, and then you were just going to wait until they worked on your ticket, right? But now that's not the case. So I'm definitely seeing that cultural shift in terms of people proactively reaching out and talking about security.
But I think there's still more work to do in that area.

SPEAKER_01

Yep, I completely agree. And I'm glad you're seeing that, because I'm seeing it as well, where people are almost shifting their mindset to earlier stages, right? If we're going to create AI-generated code at the end of the day, then we're clearly going to need to focus very much on the design. In my mind, the design-as-you-code method, which I think a lot of engineers end up using, is a little less possible with AI, right? But at the same time, what you were saying about collaborating with AI as you're building things is also important. As you're designing the systems, ask your AI assistant or agent, whatever it is, to offer input on what could go wrong, and sort of work through the four questions with your AI model. So I think it's an interesting point. Why don't we stick on that for a little bit? Because a lot of people are talking about AI right now, and a lot of people are thinking about how to best utilize it. You've already talked about threat modeling as a collaboration opportunity, so how does AI play into this? If you had your way, paint a picture for us. Where would AI fit into the SDLC when it comes to design and threat modeling in the future, in your mind?

SPEAKER_03

Yeah, absolutely. I'll talk about the threat modeling side of things a little bit, and I can talk about the SDLC as a whole as well. I love AI. I love using AI at work; I'm a huge proponent of it, and I'll tell you why. Previously, a few years ago, when we were all threat modeling as an industry, every company had the exact same problem: we only have, say, five threat modelers serving pretty much the entire company, and there are thousands and thousands of tickets waiting in our queue just to get their threat model reviews done, right? It's not scalable; it's completely broken, because by the time you actually get to that ticket, they might have released long ago. Yes, there's still some point to threat modeling at that stage, but the whole point of threat modeling is to be able to identify issues as early as possible, right? So that entire system was not scalable. In my opinion, it was broken, because we were pretty much doing it based on the bandwidth, the number of people, or whatnot. And we tried automating everything, which was not the ideal way of handling things, because to what extent would you automate? Do you want to automate the ingestion of design docs? Do you want to automate the review of the code? For that, you would need some type of intelligence. Once AI came into the picture, that's something I've really been enjoying. One of the biggest benefits we are seeing is that AI is really helping us scale. Like I mentioned, previously, if I needed to threat model, say, a thousand features just by myself, it would have taken me years.
Now, since everybody has access to AI, everybody can just go chat with it to see where things could go wrong. That's a really good start before the development even begins. Another way that we've been using AI: let's say the teams did not even think of asking AI where things could go wrong, or what to keep in mind from a security perspective. Say, in the traditional way of doing things, they came to us for a threat model review; that's where we want AI to help us scale. Think of something like this. They go to an AI system or an AI tool and they give it their design doc, or whatever they have at that point. It could be an architecture doc if they're a little more mature in the process, or it could even be a code repo if they have some code at that point. Then let AI take a look at it and at least give them some pointers: okay, these are some of the areas that you want to keep in mind as you're developing or designing your application. And then if the risk seems too high, because there are some custom implementations being done, custom auth or whatnot, or there's super critical, sensitive information being processed by the system, in cases like that, we want that AI system to pull humans into the picture. That's where we get into the whole collaboration exercise to see how we can make things better. So I think AI is really, really good at scaling, but where we would still need humans, and where humans would add a lot of value even today, is the contextual part. Because think about it: AI can give you all the threats that it can see that would apply to that particular application or system.
But at least today, because the ecosystem isn't that great yet, it's not going to have the business context that you do. It's not going to be wearing different hats and able to share everything in one go; you'll have to keep prompting it. It's not going to have, let's say, someone like a sales professional who has spoken to a customer and has some context on what this functionality is for and what its purpose is. That's not something AI can directly have. And on top of it, with all these applications that you're seeing, if you think about it, every company has its own type of security culture. And when I say that, it's not necessarily the people culture I'm talking about; it's the product culture. The language is a little different in every company. The way people understand things is a little different; the way people implement things is a little different. So putting all of that together, having that contextual background, can really help you refine the threat model that AI generated, to see what exactly matters to that particular team and the product that they're building. Like the other day, I quickly used ChatGPT to generate a list of threats for a system. It actually generated a lot of threats, but I still had to review each one of them to adjust the priorities, to see what matters, and to understand what additional context I needed to give to this particular team, because I do have some history of working with them. I understand how they work and some of the ways that they try to implement certain things. So I needed to add that additional context to make it more actionable for these teams.
If I had just copy-pasted the AI threats to these teams, it would have been really overwhelming, just like the tons and tons of findings that the SAST tools used to report back in the day, right? So I think it's definitely really good for scaling, and we should all be leveraging it as a really good starting point. But humans still add a lot of value when it comes to contextualizing the threats, prioritizing them, and really understanding what's important versus what's not.
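[Editor's note: the screening idea described above, where an AI pass reviews a design and pulls humans in only when risk signals cross a threshold, can be sketched as a simple triage rule. The signals, weights, and threshold below are illustrative assumptions, not a system discussed in the episode.]

```python
# Hypothetical sketch: route each design submission either to self-serve
# AI-assisted threat modeling or to a human security review, based on
# the risk signals Spandana mentions (custom auth, sensitive data).
# All signal names, weights, and the threshold are made up for illustration.

RISK_SIGNALS = {
    "custom_auth": 3,      # custom auth implementation in scope
    "sensitive_data": 3,   # processes critical/sensitive information
    "internet_facing": 2,
    "new_third_party": 1,
}
HUMAN_REVIEW_THRESHOLD = 4

def triage(design):
    """Return the review path for one design submission (a dict of flags)."""
    score = sum(weight for signal, weight in RISK_SIGNALS.items()
                if design.get(signal))
    return "human review" if score >= HUMAN_REVIEW_THRESHOLD else "self-serve"

print(triage({"custom_auth": True, "sensitive_data": True}))  # human review
print(triage({"new_third_party": True}))                      # self-serve
```

In practice the "score" would come from the AI's own assessment of the design doc rather than hand-set flags, but the routing decision, escalate to a human above a threshold, is the governance piece being described.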

SPEAKER_01

Yeah, that's very well said. I completely agree. I think there's a lot more context that AI can bring in, right? We mentioned MCP earlier; keyword there, context. I think that's extremely important. But I love your point as well around the fact that humans should still be there to help direct the output. Actually, as you were talking, I was trying to figure out how to frame this, and I think where I'm landing is: instead of humans in the loop, it's actually humans in the lead. Right? They should be leading the conversation, prompting, figuring out how they want to prompt, examining the output, taking the output and putting it into what it should be for their fellow humans to understand, right? There's no way that an AI would be able to do that, far into the future as well. So I think you very accurately answered where we would be in the future when it comes to humans' role in threat modeling overall. I mean, it's exciting, but it only goes so far, is sort of the point. And if you just go hands-off, let it autopilot and do whatever it wants and give the results to whoever without you being there, I think that could cause quite a bit of confusion. So I'm completely with you. I also love what you said around helping scale threat modeling. The couple of ideas that came to mind were having some sort of initial screening happen based on a new design. Here's the new design; you ask a few questions of AI, and then it can help determine if a human needs to be much more in the loop, or in the lead, right? Much more than we even talked about. So I'm curious about your thoughts. How does this look in terms of the governance of scaling this? We talked about the SDLC before, and I know that you have a lot of experience in that area. How do you design a process around this?
Because what I'm hearing is we can certainly put AI in the hands of many people across the org to scale threat modeling, but how do you ensure that's happening? And how do you sort of set up these criteria that we're talking about where you know there's a certain indication that something should go to a security person, right? Or not? Like, how do you set all that up? Would you leave it in the hands of individual engineers? Would you provide guidelines around all that? Would it be like a gating process? And if so, how would you prove, right? Like what evidence would you need that somebody went through that? There's a lot in that question, I know, but I'm hoping that your answer was will will sort of add a lot of uh depth to what I'm asking here.

SPEAKER_03

Yeah, a good question again. At least in my experience, what I'm seeing is that at a lot of companies, it all starts with a design. Or really an idea, if you think about it. So what we want to do is capture those moments when someone's ideating and start involving them in the whole threat modeling process right from there. Now, we can't just keep giving them compliance exercises at every stage, because that's going to be annoying. And with the velocity at which we want to move in this world today, it's just going to slow them down, and then forget about security, the competition is going to eat us, right? So there's definitely an interesting balance to achieve there. That said, one of the ways I've seen companies adopt this is through some sort of release readiness process. Let's say you have an idea and you convert it into a design or an architectural diagram. At that point, at least our company has a process where you pull in some stakeholders and start discussing the idea or design you have, and that's when the collaboration starts. Security is actually one of the key points of contact to involve in those discussions. Yes, it kind of looks like a checklist item, but at the end of the day it's not just security. You pull in everyone to understand how to navigate all the processes at your company so you can release that feature fast enough, but not just fast enough, also in a secure fashion, right?
So it all starts from there, and that's where we start introducing them to the idea of using this AI tool: go put in your documentation or your idea. It doesn't matter how vague it is at that point; we keep improving it as we go. So now that we've had this initial design discussion and it's been approved by leadership, let's start having these conversations. They go to this tool and initiate the threat modeling process. The evidence is basically you submitting that information to the tool, and the tool giving you back one of two things: either the risk is pretty low, so start with these threats, or the risk is pretty high, so it has generated a Jira ticket to pull in a human. So now you have two kinds of evidence. In the case of the Jira ticket, where a human is going to look at it, of course there's going to be enough evidence there. On the other side, where no human is involved, it's the AI system giving you a set of threats to start thinking about as you implement. That's where things get interesting, right? What we're trying to see is whether engineers can take all of that information and start putting down their own thoughts on what the AI is saying, especially when it comes to threats. It doesn't mean that for every threat the system gives you, you have to go address it, there has to be a Jira ticket, there has to be a mitigation strategy, and you show some code evidence that you mitigated it. That's not the idea. You take the threats, and it's not a lot, like five to eight threats that we start with, and you start adding your thoughts on the risk that's involved. And then, and you might relate to this really well,
we kind of let the Security Champions take it from there. If they really want to spend more time on how the engineers are thinking about a particular risk or mitigation strategy, we let them scale for us, because the risk is pretty low anyway. But if the risk is pretty high, then we already have humans involved and taking it from there. And I'll add one point. I'm not a huge fan of having some kind of gating mechanism, because that's where things get tricky. I've been at companies with gating strategies where, until you complete your threat model review and your pen testing and resolve all the high-severity issues, you're not going to release. And that pulls the engineers' focus away, basically saying: forget about security, the goal is to not have any high-severity threats so you can go release. So now I'm going to have a huge debate with my security engineer about why something is not high severity. I don't think that's a good mindset for a company that wants to improve its security posture. Like I said, it all comes down to having that collaborative nature. We really want engineers to come to us saying, okay, I understand this is a high risk, what can I do to mitigate it? I don't want it to be, I don't agree that this is a high risk. Yes, there can sometimes be good conversations around that, but it should never come from a release-gating idea. If that makes sense.

SPEAKER_01

It does make sense. Yeah, that makes total sense. I think gates are scary to people, too. It very much affects folks' perception of the security team: there's a block, you're getting in my way, I'm trying to get something done, you're blocking me, and this is just annoying, right? At the end of the day. Before we continue our conversation, we do need to pause for a second and hear from our sponsor. So we'll be right back.

SPEAKER_02

Today's episode is brought to you by Security Journey. Our education platform teaches valuable secure coding skills based on real-world vulnerabilities and threats, including OWASP Top 10. Learn more at securityjourney.com.

SPEAKER_01

And we're back. This has been a fantastic conversation already, and I'm very curious to double-click a little more on what we were saying around utilizing AI, sort of threat modeling with an AI assistant. But I want to come back to something you said earlier in the conversation: threat modeling is useful and beneficial because it's collaborative, right? We talked about how engineers, product, and security engineers talking together can produce really good outcomes in terms of people's understanding of the system and so forth. But now we're talking about just working with your AI assistant and sort of threat modeling through them. And I said "through them" like they're humanoids at this point in my mind. But how does that align? How can we still maintain the benefits of collaboration while also utilizing AI?

SPEAKER_03

Yeah, definitely. There are two pieces to it, right? One is like we discussed, where you start all of this with AI and you get some pointers. Obviously, that doesn't mean you're going to be responsible for mitigating all of it yourself; you might need some additional pointers. Yes, you go to the AI and it says, oh, this is bad. But because the implementation itself would be so expensive to take a different route, that's where teams sometimes want security professionals to weigh in. Do you really think this is bad? Is there anything else we can do? What does our company policy look like? What is our company's take on this particular issue? Have we seen such issues in the past? How are other product teams solving this? That's where we bring a lot of the collaboration back into the picture. Because now it's not just the person who talked to the AI assistant, or who's developing the application, and the security engineer. Now it's other teams who probably did the same thing but took a different route to mitigate the risk, at least to some extent, maybe not completely, but that will do as well. Again, it's all about balance: how do we balance usability versus cost versus security? In my experience, some of the most interesting flaws we've found were not found by using AI. They came from asking the right questions, just people talking through a certain scenario. There were situations as simple as: yes, we totally forgot to rotate our admin token for, like, a year. Okay, that's interesting, let's start there. How about we start rotating it, say, every 60 to 90 days? That was a good improvement, and all it took was a simple question and a simple conversation among people, right?
And I'll give this example too, because I think it's a really good one. Previously, people were used to working in silos, which worked fine. But now, with MCP and agentic workflows in the picture, I've seen an interesting attack scenario where you're able to chain multiple tools in the background and cause interesting behavior. These tools are not necessarily malicious; they're legit ones. But because they perform high-risk actions in siloed ways, when you connect them together, some interesting attack vectors come up. For example, say there's a tool that talks to internal systems, payment systems, billing systems, or HR systems, which hold a lot of sensitive information. And there's another tool that lets you email or send a message to someone outside the company. When you're able to connect both of them, and assuming the right controls are not in place, you're now able to take all of that sensitive information and easily exfiltrate it. And I think the major problem there is not just the missing controls. It's people working in silos and not coming together to see the big picture, right? One developer had this tool; another developer on a totally different team had another tool; connect the two and you see interesting behavior. So that's where collaboration is still really important.
And even though we want to use AI as a starting point, which is great, we as humans still need to talk about these issues, the mitigations, and our strategies with other people across the company, so that we get more ideas from each other, right?
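The tool-chaining risk in this example can be made concrete with a small sketch: label each tool with what it can read and where it can send data, then flag any chain where a sensitive-data source appears before an external-egress sink. The tool names and attributes are invented for illustration; a real agent framework would carry richer metadata.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Tool:
    name: str
    reads_sensitive_data: bool = False   # e.g. HR, billing, payment systems
    sends_data_externally: bool = False  # e.g. email or messaging to outsiders


def flag_exfiltration_paths(chain: list[Tool]) -> list[tuple[str, str]]:
    """Return (source, sink) pairs where a sensitive-data tool appears
    before an external-egress tool in the same agent workflow."""
    flagged = []
    for i, src in enumerate(chain):
        if not src.reads_sensitive_data:
            continue
        for sink in chain[i + 1:]:
            if sink.sends_data_externally:
                flagged.append((src.name, sink.name))
    return flagged


# Two individually legitimate tools become an exfiltration path when chained.
hr_lookup = Tool("hr_lookup", reads_sensitive_data=True)
external_email = Tool("external_email", sends_data_externally=True)
```

Here `flag_exfiltration_paths([hr_lookup, external_email])` flags the pair, while the reverse ordering does not, which is exactly the "big picture" check that siloed teams miss.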

SPEAKER_01

Yeah, and to do that at scale. So I have a lot of thoughts here in terms of having different tiers, right? Different levels of depth of threat modeling based on the system, the risk, the importance to the business, all of this stuff. Some of it might fall into checking a governance box: hey, I talked to my AI about some of the threats and refined the design this way, okay, or I used that tool you mentioned to identify those things. At the other extreme, it goes to the security team because it's a risky system they're building, or it's a very important system to the business, so it's worth spending more time on. And then something in between, which might still be a collaborative approach in certain scenarios, but maybe without security's direct support, or with security reviewing the outcomes later. To me, one of the ideal scenarios is when development teams are having these threat-modeling discussions without security in the room, you know? So I think there's a lot of potential there. And I think you definitely answered my question about how collaboration and AI can coexist in this new world. I'm also curious, I kind of want to get somewhat tactical with threat modeling, because I want to pick your brain on a few things. How have things evolved when it comes to our approach to threat modeling? Is the Four Question Framework, as an example, still relevant today? Should we still be following it as an overall guide? Do we still use STRIDE? Are there other tools, frameworks, methodologies, et cetera, that you've found to be more effective?
I just kind of want to get your thoughts around that.

SPEAKER_03

Yeah, absolutely. To start with, in my opinion, I don't think you particularly need a framework to threat model, because like I said, threat modeling is all about asking questions. Sometimes the right questions; sometimes even the wrong questions might help. But in a high-paced scenario where you really want some structure to the way you're thinking, sometimes I do use one. There have been situations where I look at a system, I look at the code, I look at the architecture, and I'm like, oh, these are some of the things that could go wrong, oh my God, I'm so excited, I want to go talk to the developers. But there are also other parts of the system I still need to look into, and I'm already so excited and overwhelmed by what I've found that I start losing structure and I'm all over the place. That's where I need some type of framework to ground me and give me structure, so I capture what I'm thinking in a more organized way that makes sense for the developers and the other humans involved, right? So I think Adam Shostack's Four Question Framework is amazing. Regardless of what type of technology is in the picture, you can still go with it, because it's less a framework and more a mental model to keep in mind. It's like, okay, let's start with: what do we have in scope? That's actually usually the first question. What it does comes later. And then: what can go wrong, or where can things go wrong, is the next question.
So when you start asking yourself these questions, you become more open-minded, you become more creative, you're not stuck with a certain structure or framework, and you have endless possibilities for writing down your what-can-go-wrong scenarios, right? So I think Adam Shostack's model is still a really good mental model to have. That said, yes, there are all these threat modeling frameworks available. And I've seen a lot of chatter in the industry; people were a little panicked when AI first came in, saying, oh my god, I've been using STRIDE for my threat modeling, is it not relevant anymore? In my personal opinion, STRIDE is very much relevant. It's a really solid, foundational framework you can use. But what STRIDE lacks is the mindset you need when threat modeling AI systems: the non-deterministic nature, the fact that agentic systems are autonomous. So if you want to use STRIDE, absolutely you still can; just keep that AI-specific lens in mind so you can also think along the lines of the non-deterministic and autonomous nature these systems bring in. But if you want to take it one step further, which I've been trying out a little in my work these days, there's something called MAESTRO, which was released recently by the Cloud Security Alliance. I found it really helpful. This is by no means me trying to market MAESTRO, but I just found it very helpful, especially when I'm threat modeling multi-agent environments, where multiple agents are talking to each other and acting on your behalf.
I really wanted some kind of reference points to approach that in a way that disrupts my traditional way of thinking, now that I'm looking at systems where multiple agents take actions on your behalf. Because when I threat model, I go: yes, this system is taking action on my behalf, so let me start threat modeling it, which is great. But at the same time, I have to keep in mind that there are other systems interacting with it, and other systems that can take action on this particular system. That connectivity, that integration, is what MAESTRO has been helping me with. I believe it's designed for multi-agent workflows, if I remember correctly. I think MAESTRO stands for Multi-Agent Environment, Security, Threat, Risk, and Outcome. Yes, I remembered it all. Amazing.
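As a rough illustration of "STRIDE with an AI-specific lens," one could pair each STRIDE category with a classic question and an agentic-systems question. The pairings below are illustrative examples only, not part of STRIDE or MAESTRO:

```python
# Each STRIDE category maps to (classic question, AI/agentic-lens question).
# The question wording is illustrative, not from any published framework.
STRIDE_WITH_AI_LENS: dict[str, tuple[str, str]] = {
    "Spoofing": (
        "Can someone impersonate a user or service?",
        "Can a crafted prompt make one agent impersonate another agent or tool?",
    ),
    "Tampering": (
        "Can data be modified in transit or at rest?",
        "Can tool outputs or retrieved context be poisoned before the model sees them?",
    ),
    "Repudiation": (
        "Can an actor deny having performed an action?",
        "Are autonomous agent actions logged well enough to attribute them?",
    ),
    "Information disclosure": (
        "Can sensitive data leak to the wrong party?",
        "Can the model be coaxed into revealing secrets from its context?",
    ),
    "Denial of service": (
        "Can the system be made unavailable?",
        "Can an agent loop or runaway tool calls exhaust quotas and budgets?",
    ),
    "Elevation of privilege": (
        "Can someone gain rights they should not have?",
        "Can a low-privilege prompt trigger a high-privilege tool invocation?",
    ),
}


def questions_for(category: str) -> tuple[str, str]:
    """Return the (classic, AI-lens) question pair for a STRIDE category."""
    return STRIDE_WITH_AI_LENS[category]
```

The point is not the specific wording but the habit: keep the six categories, and ask each question twice, once traditionally and once with autonomy and non-determinism in mind.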

SPEAKER_01

Well done on the spot with the recording lights staring at you.

SPEAKER_03

Yeah, nailed it. Thank you. I'm proud of it. But I think where MAESTRO shines is that it takes a layered approach. It starts all the way down at the foundational models, it touches data and the non-deterministic nature, and it goes all the way up to agentic actions. Sometimes it's just nice to have that mental model as you're threat modeling something. But again, like I said, you don't need any of this to threat model. You can still do a really good job just by asking questions; right or wrong doesn't really matter, just ask the questions. But if you want some structure or some pointers while you're doing it, yes, frameworks can definitely help.

SPEAKER_01

Yeah, I agree. I think it's about having that inquisitive approach, right? Sort of like how this podcast is going: based on somebody's response, can you adjust what types of questions you're asking, digging into certain things where necessary, but also stepping back and asking about something else we may not have thought about? And I like your thinking around how to go beyond even the traditional questions we ask, in this new world of multi-agent deployments and so forth: what other things should we be thinking about these days that maybe we didn't in the past? Where I find STRIDE, MAESTRO, these types of things useful is for helping brainstorm. Sometimes you just get stuck: have we covered everything? It's a bit of a coverage question. We've spent a lot of time in this area; maybe we should step back and choose another acronym letter and really focus on repudiation or something this time around. Can you do STRIDE on the spot too?

SPEAKER_03

I think so. Spoofing, tampering, repudiation, information disclosure, denial of service, and elevation of privilege.

SPEAKER_01

Yes, good job. Well done.

SPEAKER_03

Thank you.

SPEAKER_01

Solid. All right. Well, what about tools? This is a big question when it comes to threat modeling. To what extent should we use vendor tools? I'm not asking you to highlight any specific vendors here, but where do tools fit into the ideal approach we were talking about? And I'm also curious: there are a lot of things on the market right now aiming to automate threat modeling. You don't have to worry about threat modeling, just give your source code to this tool, this vendor, and they'll produce the threat models. You might be able to guess my opinion about those, just from the way I expressed that. But I'm curious what you think.

SPEAKER_03

Okay, see, I'm not a huge fan of just automating everything and letting it run. We're already seeing how that works out across the whole agentic world. Just the other day, I was reading about an incident where an agent was given full permissions on a production database, and when it hit some ambiguity while trying to do something, it panicked and deleted the entire production database, without asking a human and without even checking whether there was a backup copy. And there was no backup copy. So it's nice that we have all these agentic workflows helping us automate tedious tasks, but that doesn't mean we automate everything and just sit there watching the outputs. They can be that catastrophic sometimes. Even with threat modeling, I think this goes back to what a lot of people have been wrestling with: yes, we have all these coding agents and assistants doing most of the work, and we're just vibe coding here. So what about my skills? What about my critical thinking? The same applies to the security side of things. If you automate every single thing and just look at the outputs, then what about the human element? What about the collaboration? What about asking all those questions, figuring out the business context, and applying that to your prioritization strategy? My take is that threat modeling is not a report, it's not a checklist, it's not a compliance exercise. Like we discussed, it's a collaboration exercise: everyone trying to make the product better from a security standpoint. And yes, you'll probably have to make some trade-offs along the way.
But that said, tools can still help with the scaling part, like I mentioned, or even just help non-technical folks get into threat modeling by interacting with these AI assistants. Because if you think about it, someone from a non-technical field who just vibe-coded an application, even an internal one, wouldn't necessarily know what type of guardrails to put in place so that someone on the outside, or the inside, can't exploit the system and do bad stuff, right? For someone like that, when you don't know where to start, something like ChatGPT or Claude is a really good starting point: hey, I vibe coded this, how do I think about security? Is there anything I have to worry about? If the tools can help with that starting point, that's really useful. I'll give you a quick example about scaling. Even in the traditional part of our experience, we had all these forms we asked developers to submit, to determine whether a particular threat model would require a manual review, based on the risk involved. It would just have a few drop-downs: does your system interact with, say, restricted data? Does your system have authentication in place? A standard list of questions, and you pretty much make a decision based on that. We're doing the exact same thing with the AI tool today.
But what we offer in addition today is this: in the previous case, once you submitted the form, people would start manually writing down what threats could apply, what mitigation strategies you should have in place, and so on, which was very tedious, because you had to do it for every single feature. Now you have an automated report to start with. But that doesn't mean we just take all of it and go with it. We improve on it, we refine it, we apply priorities, we apply our specific company lens, and we make those decisions as we go. So, long story short, I'm not a huge fan of automating everything, because we'd lose the collaborative part, which provides a lot of value when it comes to business context, the history of incidents, the patterns, all of it. But yes, tools can really help you scale. They're a good starting point, but we still need humans in the loop as part of the process.
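The drop-down screening form described here amounts to a simple predicate. A minimal sketch, assuming hypothetical question names and a policy of escalating to manual review on any high-risk answer:

```python
def needs_manual_review(answers: dict[str, bool]) -> bool:
    """Decide whether a feature warrants a human threat-model review,
    mirroring the intake-form idea above. The question names and the
    any-signal policy are illustrative, not a company's actual criteria."""
    high_risk_signals = {
        "handles_restricted_data",      # e.g. the "restricted data" drop-down
        "internet_facing",
        "changes_authentication",       # e.g. the "authentication" drop-down
        "new_third_party_integration",
    }
    # Escalate if any high-risk question is answered "yes"; unanswered
    # questions default to False here, though a stricter policy might
    # treat missing answers as high risk instead.
    return any(answers.get(q, False) for q in high_risk_signals)
```

A real deployment would likely weight the questions rather than treat them uniformly, but even this binary version captures the triage step the form was doing before the AI tool took over the write-up.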

SPEAKER_01

Yeah, fantastic answer. I hate to do it, but our time is running short. This has been a great conversation. Thank you so much for sharing your knowledge and opinions with all of us. Would you like to leave the audience with a takeaway? Think about our whole conversation here: anything you really want to stick in everyone's minds?

SPEAKER_03

Yeah, absolutely. A couple of things come to mind. Threat modeling isn't about predicting every single possible attack. It's about asking questions as early as possible, so we can all think about how things could go wrong. And AI can definitely help us scale threat modeling, but we still need humans who understand what the most important threats are to that particular product, feature, or company, and what actually matters based on the historical and business context.

SPEAKER_01

Fantastic. Yeah, I think that's a good summary of what we talked about as well. Thanks again, Spandana, for being here. Once again, this has been another episode of the Security Champions Podcast. Thank you so much for joining us today. We'll see you next time.

SPEAKER_02

The Security Champions Podcast is brought to you by Security Journey. Security Journey is an enterprise class secure coding training platform with lessons that are built on learning science principles to deliver long term, measurable results. Learn more at securityjourney.com.