
The Secure Developer

Latest episodes

Jun 16, 2020 • 25min

Level Up Your Security Champions With Yashvier Kosaraju

For this episode, we are joined by Yashvier Kosaraju, who manages the product security team at the ever-inspiring Twilio! Yash is here to share a whole load of insights and learnings from his career, with a specific focus on the 'Security Champions' program at his current company and what management means to him coming from a consulting background. We hear from our guest about the unusual path he chose to his career and how an interest in cryptocurrency led him into the security sphere. Yash does a sterling job of unpacking the way the different security teams are laid out at Twilio, their relationships to each other and the developers, and where the lines are drawn. Our guest gives us some insight into the work that he and the team typically do and some examples of their projects, and there is also time for some philosophical musings as we talk with Yash about the importance of developer empathy for anyone working in security, as well as the high value he places on listening as a means to improvement. The 'champion' concept at Twilio is really inspiring, and the conversation covers how this actually works within teams and departments and the incentives and rewards that are offered for better security practices. Listeners can expect to gain access to a high-level and integrated systems approach, something that could be helpful to anyone in the space!
Jun 9, 2020 • 41min

DevSecOps - Developing A Better Software Delivery Model With Alyssa Miller

On today's episode, Guy Podjarny talks to Alyssa Miller, a security advocate who is here to talk about everything DevSecOps. Alyssa begins by detailing her extensive experience, from working in FinTech to becoming a penetration tester, security evangelist, team leader, and security consultant. After talking about her experience with app security, Alyssa shares her perspective of the tech world and the incredible changes that have emerged over the past two years, including the rise of cloud technology and the use of Docker images. Then Guy and Alyssa talk about Snyk's DevSecOps Hub, a resource that guides organizations in implementing DevSecOps. Along with theory on the topic, the hub is filled with practical advice as it relates to DevSecOps culture, the 'people components' of a business, processes, and technology. The Hub also has a space for people to share how they've implemented and matured their DevSecOps models. Throughout the conversation, Alyssa draws on her experience to provide insights on DevSecOps, emphasizing the need for a model that integrates continual improvement, shared responsibility, and an aim for greater security.
Jun 4, 2020 • 34min

Open Source Security And Technical Management With Ryan Ware

On today's episode, Guy Podjarny talks to Ryan Ware, a Security Architect and director of the Intel Products Assurance and Security Tools team. He has been at Intel since 1999, and has focused on product security for almost his entire career. His current passion is ensuring that developers at Intel have the right security tools in their hands to be able to quickly and efficiently understand the security implications of the choices they make in their daily work. In this episode, Ryan and Guy discuss open source security and how Intel deals with vulnerabilities in open source projects, the collaboration between security and development teams at Intel, and how COVID-19 has affected Ryan's job. Ryan shares his perspectives on balancing management and individual contributor roles, some tips for that transition, as well as his final advice for teams looking to level up their security foo. Tune in today!
May 28, 2020 • 35min

Container Security, Microservices, And Chaos Engineering With Kelly Shortridge

On today's episode, Guy Podjarny talks to Kelly Shortridge about security, microservices, and chaos engineering. Kelly is currently VP of product strategy at Capsule8, following product roles at SecurityScorecard and BAE Systems Applied Intelligence, as well as co-founding IperLane, a security startup that was acquired. Kelly is also known for presenting at international technology conferences, on topics ranging from behavioral economics in infosec to chaos security engineering. In this episode, Kelly explains exactly what product strategy and management means, and goes into the relationships and tensions between dev, ops, and security and how that has changed. We also discuss container security and how it is different from other endpoint security systems, as well as the difference between container security and microservices. Kelly believes that we are overlooking a lot of the benefits of microservices, as well as the applications for chaos engineering in security. Tune in to find out what changes Kelly sees happening in the industry, and see what advice she has for teams looking to level up their security!
May 26, 2020 • 24min

Career Shifts And Holistically Managing Security Transitions With Dr. Wendy Ng

Careers often take interesting, meandering journeys and coalesce in unexpected ways. With a Ph.D. in Medical Genetics, today's guest, Dr. Wendy Ng, did not envision herself working in DevSecOps. However, she has combined her academic skills with technical prowess to now hold the role of DevSecOps Security Managing Advisor at Experian. We kick the episode off by learning more about Wendy's diverse background, from her time in the lab to her first network engineering position and what piqued her interest in security. From there, we move to what she saw being a consultant, working across multiple industries. She realized the importance of not always chasing the shiny object and the research it takes to implement new security systems effectively. We then take a look at her time with Experian and what she's gained from it so far. She has seen firsthand what it takes to manage security transitions holistically and shares these insights with us today. We round the show off by talking about the power of collaboration and knowledge sharing within an organization. Be sure to tune in today!
May 21, 2020 • 35min

The Rise Of HTTPS And Front-End Security Toolbox With Scott Helme

For this episode of The Secure Developer Podcast, we welcome Scott Helme to chat with us about front-end security. Scott is the force behind Security Headers and Report URI, and he is also a Pluralsight author and an award-winning entrepreneur! We get to hear about Scott's professional trajectory since leaving college, the interesting developments and changes he has made along the way, and his current work with his different projects. Scott then explains the service that Security Headers provides, something that he created to effectively scratch his own itch. The educational value it offers is quite remarkable, and our guest does a great job of explaining exactly how it functions and its ease of use. From there, he turns to Report URI and explains how this company complements the services of Security Headers. Our conversation progresses onto the topic of HTTPS and the encouraging increases that have been happening for years now in terms of adoption and, ultimately, security. This is something that Scott has been very excited about and happy to see, as it shows a general trend in the industry towards better, safer practices and standards. The last part of our conversation is spent with Scott sharing some thoughts on organizational approaches to security and what he sees in the near future for the space. For all this and then some, tune in today!
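For a flavor of what a response-header check like the one Security Headers performs involves under the hood, here is a minimal, illustrative sketch in Python using only the standard library. It is not Scott's implementation or grading logic, and the header list is just a commonly recommended subset:

```python
# A minimal sketch of a response-header check, assuming Python 3 and only the
# standard library. Illustrative only -- not Security Headers' actual scanner
# or grading logic.
from urllib.request import urlopen

# Headers commonly recommended for a hardened HTTPS site (an assumed subset).
EXPECTED_HEADERS = [
    "Strict-Transport-Security",   # enforce HTTPS on future visits
    "Content-Security-Policy",     # restrict where scripts/styles may load from
    "X-Content-Type-Options",      # disable MIME-type sniffing
    "X-Frame-Options",             # mitigate clickjacking via framing
    "Referrer-Policy",             # limit referrer leakage to other origins
]

def check_security_headers(url: str) -> dict:
    """Fetch a URL and report which recommended security headers are present."""
    with urlopen(url) as response:
        present = {name.lower() for name in response.headers.keys()}
    return {name: name.lower() in present for name in EXPECTED_HEADERS}

if __name__ == "__main__":
    for header, found in check_security_headers("https://example.com").items():
        print(f"{'OK  ' if found else 'MISS'} {header}")
```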
May 18, 2020 • 41min

Navigating The Terrain Of Shared Responsibility With Iftach Ian Amit

Today we have a great guest who brings battle-tested perspectives on security from both inside and out, Ian Amit! Ian is Chief Security Officer at Cimpress and founder of the Penetration Testing Execution Standard, as well as the Tel Aviv DEFCON group (DC9723). Ian has worked on everything from pen testing to red teaming, risk management, research, and national security too. We kick things off hearing about Ian's journey in the field, starting out tinkering with computers in his early teens and working on application security in its nascent phase, before he moved into consulting and then went full circle from the vendor to the customer side in his current position. Ian moves on to talk about his approach to vetting vendors in light of having been one himself, and his experiences at Amazon with the difficulty of drawing the line on shared responsibility for security between cloud providers and clients. We then move to hear more about the mass customization services Cimpress provides before digging into their practices for offering custom security to their clients. Ian sheds light on the minimum standards Cimpress's clients need to meet with regard to their secure software development practices and more. He talks about how Cimpress guides their clients in this manner using a secure SDLC framework and the 'paved road' approach, weighing in on how this is also expanding their best practices further afield. We wrap things up hearing about the challenge of finding metrics to measure their evolving systems, and Ian talks about their use of the NIST Cybersecurity and FAIR frameworks in this regard. Tune in for some brilliant insights from a man who has done it all!
May 14, 2020 • 39min

A Broader Cultural Perspective Of Cybersecurity And Digital Transformations With Steve White

In episode 59 of The Secure Developer, Guy Podjarny talks to Steve White, Field CISO at Pivotal, where he gets to regularly exercise his passion for working at the intersection of application security, development, infrastructure, and operations. Steve spends his time helping organizations envision and implement new ways of integrating security into their software development, deployment, and operations life cycle. Most recently, his focus has been on cybersecurity, helping build a cybersecurity consulting practice for Microsoft and then leading security teams for companies such as Amazon, Sonos, and CenturyLink. Prior to joining Pivotal, Steve was the Chief Security Officer at ForgeRock.

In this episode, we get a broader perspective from Steve on digital transformation within organizations. We also hear why he recommends making small incremental changes, we discuss the idea of a security champion, as well as best practices for helping developers understand the importance of cybersecurity work. Finally, Steve shares how to recognize when organizations are having challenges with digital transformation, and why it is key to focus only on the actual threats and not the imaginary ones. So don't miss out on today's enlightening conversation with Steve White of Pivotal.

Transcript

[00:01:32] Guy Podjarny: Hello, everyone. Welcome back to The Secure Developer. Today, we're going to get a bit of a broader market perspective here from someone who works with a lot of security and development through the years across the enterprise, and that is Steve White, who is a Field CISO at VMware. Steve, welcome to the show. Thanks for coming on.

[00:01:49] Steve White: Thanks, Guy. Thanks for having me.

[00:01:50] Guy Podjarny: Steve, we're going to go broad in a sec. But before we do that, tell us a little bit about yourself and your path to where you are today.

[00:01:58] Steve White: Absolutely. Well, the first thing I'll say about my path was, like many, it was accidental in a lot of cases. I started my career really honestly back before security was even a profession, among the early security practitioners. We were sys admins and network admins and the people running the systems. We didn't have things like firewalls and we didn't have things like anti-malware software. We kind of invented this space, trying to protect our systems. The first firewall I ever used was a bit of software running on a Sun server.

Fast-forward a career from there, I learned to really appreciate all facets of security during those early years. I moved into some application development roles, ultimately a senior tech leader role, and then moved into security full-time, trying to help build up a security consulting practice for Microsoft. Then from there, I've held a number of internal security roles at places like Amazon, CenturyLink Cloud, and Sonos. Then I was the Chief Security Officer at ForgeRock.
Now, I'm a Field CISO at Pivotal VMware and spend my time really focusing on how I can best help organizations think through and strategize around this transformation into cloud native. How do we take what had become traditional enterprise security mechanisms and methods, and how do these change based on sort of this move to interesting things like containers and microservices and agile development? That's what I spend my time thinking about and looking at today.

[00:03:35] Guy Podjarny: Who do you typically work with? Who's the peer in the companies you work with, or maybe the profile of the companies?

[00:03:42] Steve White: It tends to be the larger global enterprises, so those companies who are primarily going through digital transformations. Companies who are writing a lot of their own custom code that they derive significant business value from, and they're working to transform how they write that code from sort of the traditional monolithic waterfall method into the microservice-oriented, cloud native, 12-factor apps, right? It's those companies who are making that transformation because it brings business value to them. I'm working primarily with their security leadership and security engineering and architecture organizations.

[00:04:29] Guy Podjarny: Within those organizations, within the enterprises that you work with, who is the sort of typical profile or role of a person who works with you on understanding the security concerns? Is it more the CTO, or more of a security-minded role?

[00:04:44] Steve White: It's definitely the security organizations. I have a number of peers that I work with who spend more time on the application development organization side of things. I focus almost exclusively on the security organization, so I spend my time talking primarily with CISOs, with directors of information security, or sort of the leadership in engineering and security architecture kinds of spaces. I spend much of my time there.

Lately, I have been doing some more detailed hands-on security workshops with, I would say, representatives from every security discipline in the company: security operations, incident response, architecture and engineering. We'll bring them all together in a room for a day and work through some of the implications of what cloud native security really means in each of those parts of the security team.

[00:05:36] Guy Podjarny: Thanks for that. It's sort of the context.

[00:05:37] Steve White: Yup.

[00:05:37] Guy Podjarny: Just to dive right into it, you're helping these organizations keep security, or even level up security, as they do this sort of digital transformation and embrace all of these exciting new technologies. What do you see as the pillars or the core tenets of change that they need to make?

[00:05:55] Steve White: Well, it starts with – The first tenet of change in this space is that it's not just a technology change, although there is some technology shift that needs to happen. It's a culture and a perspective change that ultimately is the larger piece of what's to happen in information security, like it has happened in the rest of the business, right? We liken it to this change from security historically perhaps being perceived as providing gates, where you had to pass through the security gate to get to production or something like that.

The phrase I like to use is we're moving from gates to guardrails, right?
So security's function in the enterprise moving forward should be to provide these – they're like the safety net, right? There's a top and bottom guardrail that protects you from sort of exceeding really bad parameters, but within those guardrails, development teams and operations teams have the flexibility to move around, to fluctuate, to flex, and to experiment frankly with what they need to do. That's one big topic. It's that cultural shift, that mindset.

When you start to peel that back, how do you think about these culture changes, it really honestly comes down to – from my experience, it's the idea of pairing, right? The key differentiator I believe these days in helping security transform into this kind of cloud native organization is pairing them with developers from application development teams and vice versa, right?

Let's expand our knowledge, let's expand our relationships, and let's expand our understanding of how this work impacts the business. I think that's one of the really key factors.

There's a whole lot of technology that comes with that culture change too. But without the culture and perspective change, all the technology in the world isn't going to make a difference.

[00:07:55] Guy Podjarny: I've got a whole series of questions now to ask based on that. I'll start with that pairing comment. Pairing is a bit of a loaded term in the world of development, as in extreme programming pairing. When you talk about pairing developers and security people, are you literally talking about two people watching the same screen and working together, or pairing them to a team?

[00:08:15] Steve White: In an ideal world, yeah, both. So I am ingrained in the Pivotal culture. I know you're familiar with Pivotal and what we created, right? Pivotal is very big on extreme programming and pair programming and test-driven development and all of the things that go with that. I'm here because I believe pretty strongly in the value of those things, but not every enterprise is there, right? Not every application development organization, for example, sees pair programming the same way.

When I speak of pairing, I would love to see it be in the true sense of paired programming, where two heads are sitting in front of one screen, working on solving one problem together. If one of those folks is a security engineer and one of them is a feature developer, they're both learning a lot and adding a good chunk to the conversation. That doesn't necessarily work for every organization. If you're not an organization where pairing is a particular practice that you use, then you go along the line of things like rotations, right? Take a security engineer out of security. Put them into a feature development team for 90 days, or let them be a part of that team and participate at whatever level they can and write code at whatever level they can, and vice versa, right? Take a feature developer. Embed them in the security organization for 90 days or 180 days, whatever you can do.

Pairing can look like a lot of different things depending on what's appropriate to that enterprise. But it's pairing these folks who would not necessarily have been working together side-by-side. Give them common shared goals and outcomes for a period of time and let them learn from each other, right?
That connectedness in that relationship I think is a really important part of that.

[00:10:03] Guy Podjarny: I love the analogy to sort of extreme programming pairings. I think organizational pairing makes a lot of sense to me as well. That visual of working together is a beautiful one for the cases where it works.

[00:10:14] Steve White: It is.

[00:10:16] Guy Podjarny: Unpacking another piece that you said over there was this notion of guardrails. I'm a firm believer, right? With guardrails, you basically paint out the extremes and the general paths that run between them. How do you help developers make the right decisions in between the guardrails? There's still a range of security decisions that they make.

[00:10:36] Steve White: That's right, yes. This is another key thing I think that's enabled by this idea of cross-team sharing that I described, and that's that in the modern sort of cloud native security organization, I'd say a good chunk of time ought to be spent writing tools, writing code that the rest of the organization can adopt. Whether those are code blocks that enforce identity, authentication, and authorization in a particular way for the languages used by your company, right? That's one idea. There's a lot of other ways that security teams can write code, and that's typically in the realm of either reusable objects that developers can embed in their project and use, or tools that help them integrate their methods and their procedures better with the tools that exist in the rest of the organization. Having security focused on writing code in those spaces I think pays big dividends on that question. How do you help developers navigate the guardrails? You give them tools to do that. The security organization should spend a good chunk of its time creating tools, and listening to the developers and making them better, so those tools better fit how the developers do their work.

[00:11:57] Guy Podjarny: Yeah, for sure. I guess you work with security organizations that might not have had that approach before. I imagine many times they don't necessarily have the skills in the team to take on those coding activities. How do you see, or how do you guide, organizations navigating that expansion or transition of skills? Actually, if you have any opinions, which skills can they actually invest less in as we move to this world?

[00:12:29] Steve White: Yeah. I'll maybe start with the first part of that question. I absolutely have thoughts on that. Everything goes back to agile. It's about incremental change. The first thing I would say to organizations making that transition is don't try to do the whole thing at once, right? Pick a particular area where you can make an impact, and you don't want it to be a low, quiet, non-visible area. Some people try to do incremental change and they'll pick this little quiet part of the business that doesn't have a lot of visibility and impact. That's actually not the right way.

The right way is to pick a small piece of what you do that has lots of visibility, lots of impact, and make a change there, right? Pick a marquee application that's being developed and have one of your security engineers work with that team, or vice versa. Pick a particular problem area. So if you do a survey across your development organizations and there's a particular problem, let's say with how SQL authentication happens with backend databases, that's visible and it often will cause security vulnerability alarms. Pick that one problem, and now go and write some code to solve for that problem.
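To make that last point concrete, here is a minimal sketch of the kind of reusable helper a security team might publish, assuming a Python codebase and the standard sqlite3 module. The names are hypothetical, and this is purely an illustration of the pattern, not anything Steve describes building:

```python
# A minimal sketch of a security-team-supplied query helper, assuming Python
# and the standard sqlite3 module. Names (safe_query, the users table) are
# hypothetical. The point: parameters are always bound, never concatenated
# into SQL, so developers get injection-safe queries by default.
import sqlite3

def safe_query(conn: sqlite3.Connection, sql: str, params: tuple = ()) -> list:
    """Run a parameterized query; refuse SQL that looks string-formatted."""
    if "%" in sql or "{" in sql:
        # Crude guardrail: reject SQL that appears built via string formatting.
        raise ValueError("Pass user input via params, not string formatting")
    return conn.execute(sql, params).fetchall()

# Usage: the user-supplied value travels as a bound parameter.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES (?)", ("alice",))
rows = safe_query(conn, "SELECT name FROM users WHERE name = ?", ("alice",))
print(rows)  # [('alice',)]
```

The value of a helper like this is that query building by string concatenation, the classic SQL injection vector, never appears in application code at all.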
If you don't have any folks in the security organization who are developers, borrow some. This is back to that pairing idea. Bring a couple in from the application development organization and have them help you write tools that they and their peers would want to use, right? But do it in a paired programming model. So even if you're not big on paired programming, in this scenario bring in one of your security engineers who's eager to learn new things. Pair them with that developer in whatever way works in your organization. Let them learn from that effort. That's how an organization can get started right away.

The other thing I would be doing for any enterprise organization doing security today is hiring for these skill sets. As you're hiring new people into the organization, this is a place you can hire junior people into security. Maybe they don't know a lot about security, but they're pretty good. They've got some good development skills behind them. You can hire them into the organization, as most enterprise-size security organizations regularly have openings that they're trying to fill. Rethink some of those open positions. Repurpose one or two of them. Bring in a developer-heavy candidate, or even a developer with little security experience, and then train them over time.

The last piece I would mention is training, right? Find individuals inside your security organization who raise their hands and say, "Hey, I want to learn this new thing. I want to be part of this change," and give them some training, right? You can invest in your people, and sending a security engineer to training to learn some development skills creates loyalty. It creates energy. It creates enthusiasm. It creates a whole lot of positive side effects that they can then bring to the organization. Those are three straightforward things you can do. There's a variety of others.

[00:15:37] Guy Podjarny: Yeah, that's great advice, both on where you focus, which is to pick one that matters and not one that's kind of hidden, and on the transitional change and the pairing once again. I think that's definitely a strong theme and a powerful one in the human aspect of sort of [inaudible 00:15:54]. Those all sound like really good suggestions. We're going to make a bit of a transition towards the world of dev, right? This is security, building those kinds of skills and tools with different approaches. How do you then advise that they engage with dev? I mean, how do you see the collaboration happening in terms of process and steps?

[00:16:19] Steve White: There's a lot of different ways you get after that. I'm a big fan, and I've seen a lot of success in, what I have historically called the security champions program, right? Security champions tend to be – it's a way you help the development teams get invested and take ownership of security for their code, the stuff that they're creating. It's difficult to try to train everyone all the time and get them really enthusiastic about security things, right? I think that's not effective for most organizations.
It's not effective, that is, to try to get an entire development team jazzed about security. Instead, it's the same idea as on the security side: pick a person, one person that's part of the team, who has some enthusiasm for security, who has some understanding or some background in why it's important, and invest in them, right? Give them some additional training. Delegate to them some responsibility that perhaps the security team might've held within their arms previously. Find a security champion. Say, "Hey, we're going to invest in some training and we're going to invest in some responsibility, right? Now, because you're on the team, we're going to take off the reins somewhat. We're going to take off some controls, loosen the guardrails, and give you more flexibility within this operating framework, because you now have a security voice embedded within the team."

That person, because they are a day-to-day functioning member of the team, can find incremental ways to help the team make small changes or do small things a little better. I'm a big fan of that natural growth of security awareness inside of a team.

The second part of that, frankly, is to move security tooling and security validation earlier in their process, right? Well, I like to talk about this shift left thing with security. Although, if you ever look at the DevOps lifecycle, it's a continuous loop, so how you shift left in a loop is beyond me. But nonetheless, we still use that terminology. For me, it's simply: where do I provide security feedback to developers in a more timely fashion, and in a way that's consistent with the way they work? That's always been one of the challenges. In security, we'll run our SAST and our DAST tools somewhere down in the pipeline, and all we do is send them a report of a thousand vulnerabilities in their code, and we're done.

That's just not how you build those bridges, right? It's not how you build awareness. So it's about finding ways to give those developers actionable, real-time feedback as close to the time they write the code as possible, and then making sure that when you're providing security-related findings or feedback or what have you, that it really, actually, truly is actionable. It's not fanciful. We have this tendency in security to sort of take the high ground, right? It's like all the vulnerabilities have to be gone. There can't be a single line of misplaced code. That's not really where we need to be, right?

One piece of advice I've heard from others I respect in this space is to pick a particular security problem you're going to focus on, say, for a month, and have all the development teams focus on that one class of vulnerability, like SQL injection. We like to pick on that one, because it's a pretty bad one. All the teams focus on it: we're going to focus on SQL injection this month, and so we're going to turn the knob up. We're going to turn the noise level up on SQL injection this month, and we're going to do everything we can. Then next month, we'll look at something else. But engage the teams in that conversation about what it should be, how they should receive the feedback, what's best for them. If you actually engage the developers in that conversation, I think you will ultimately get better results. Those are some of the keys, yeah.
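As a rough sketch of what "turning the knob up" on a single vulnerability class could look like in a pipeline, here is a small example in Python. It assumes scanner findings arrive as a JSON list with a "cwe" field on each finding, which is a common convention but not any specific tool's schema:

```python
# A rough sketch of a "focus month" CI gate, assuming scanner output is a
# JSON file containing a list of findings, each with "cwe", "file", "line",
# and "message" fields (an assumed schema; real scanners differ).
# Only this month's chosen vulnerability class fails the build loudly.
import json
import sys

FOCUS_CWE = "CWE-89"  # SQL injection, this month's focus class

def gate(findings_path: str) -> int:
    with open(findings_path) as fh:
        findings = json.load(fh)
    focused = [item for item in findings if item.get("cwe") == FOCUS_CWE]
    for item in focused:
        print(f"[{FOCUS_CWE}] {item.get('file')}:{item.get('line')}: "
              f"{item.get('message')}")
    # Non-zero exit fails the CI step, but only for the focus class;
    # other findings can still be reported elsewhere without blocking.
    return 1 if focused else 0

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1]))
```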
[00:20:20] Guy Podjarny: All great advice, and I very much resonate with the whole shift left point. I like to say it's not shift left; it's top to bottom, going from central governance and central controls to sort of bottom-up, empowered teams.

A question about the practicality of the security champions program, which has lots of good things to say about it. One of the pushbacks is that organizations don't always acknowledge that this developer, the person in the development team who's now been given some form of authority, might have a different job. How do you recommend, or what do you see work best for, organizations that do the security champion thing, in terms of the role description for the security champion? Is it a percentage, less work that they do on the product side? I mean, how does that relate to their day job?

[00:21:08] Steve White: Well, that's a really interesting question, and I think it's also in some ways a cultural question, right? If you are trying to measure the output of individual developers down to the level where, if they've spent two hours on security champion work, it would show up in some developer productivity metric, I think you need to question the cultural approach to that, because frankly, the output of development organizations should be focused minimally on the team output, right? There is very little external measure, I would argue, that is effective in measuring explicit individual developer productivity, especially in organizations where you're pairing, because now it's like, "Well, am I measuring the pair?"

Frankly, I would first say if you're trying to measure individual developer productivity at that level, I think you need to ask some tough questions about whether that's really effective and whether that's really the culture you want to drive. If you take that up a level and you're measuring productivity and impact, it's really more about impact and feature flow and team metrics, right? Are they getting impactful things to the customers? Do the customers love what they deliver? Do they deliver frequently? Those kinds of metrics are what's important. I would argue that you're not going to see a big change in those by having one person spend some time on security champion types of duties.

The other thing is that, once trained – so there is a training period, and during that training period there may be some reduction in flow – the responsibilities of the security champion are really just to speak up during planning, or speak up during design sessions, to ask the questions of the team. Did you think about this? Did we include this? Is this something that really should go through a deeper security review? It doesn't take a lot of work, right? It's not a big investment of time. It's a focus versus amount-of-effort kind of thing, so it shouldn't take a lot of time.

[00:23:14] Guy Podjarny: That's a great aspect. It leads me to another question I wanted to ask, which is about this measure. That team is also supposed to produce secure software, and presumably whatever load [inaudible 00:23:26] provided by this individual might be helping take some off, or helping others do their work more effectively. We're going to take that last step into the world of dev and ask some questions there. How do you see, again, best practices around helping the dev side of the fence appreciate security work, the time spent, the effort made in security, in terms of measurements, mandates? I mean, these are enterprises [inaudible 00:23:57], small organizations sometimes, or small [inaudible 00:24:00] in values and change suffice.
It tends to be that in the enterprise it needs something a little bit more structured. What works best?

[00:24:09] Steve White: First off, I don't think I've seen a one-size-fits-all for every enterprise, because honestly it is a very cultural perspective. Even within enterprises, there's a big variety in culture. But I will say that I think the most effective thing I have seen in terms of helping developers understand the impacts and the importance of security is frankly the value of something like a pen test, but a pen test specific to the code they're writing, because really a penetration test, or testing like that, comes from the attacker mindset. What we're really trying to do in this scenario is to help the developers adopt an attacker mindset. It's like, if I was attacking this code, number one, why would I? Give them some really good illustration, like if I break 'this', then I get to 'this'. Now, all of a sudden, I've copied a thousand credit card numbers, or healthcare or health record pieces of information.

Nothing, I think, reinforces that message better than that kind of effort, and that's true from the executive tier on down, right? A good penetration test that demonstrates a chain of vulnerabilities carries a powerful illustration.

[00:25:26] Guy Podjarny: A sobering moment too.

[00:25:27] Steve White: It does. Those things don't have to be external. The other really interesting thing here, if you're building this kind of culture, is that you can actually build a pen testing or vulnerability assessing kind of mindset even within the application development organization, right? There's nothing that says that you might not take a sprint and attack your own code, or attack your neighbor's code and have them attack yours. You actually spend some time having folks go out and purposefully attempt to exploit their own code or the other team's code. It really reinforces, A, thinking like an attacker, understanding how these things can chain together, and more importantly, leading back to what's really important to the business.

Every developer I've ever worked with, they care about the business. They care about what's important to the business. They care about their code doing good things for the business. If you can always tie these back to what data is being protected and how their code fits into that story, and if you can make this connection very visceral through pen testing and those kinds of things, I find that carries a lot of emotional value for the organizations.

[00:26:42] Guy Podjarny: I love that as well. I mean, security to an extent is sobering, as well as fun. Security and risk is – this other one's boring but sort of –

[00:26:51] Steve White: I will tell people the most fun I ever had in security is red teaming, right? My most fun assignment ever in security was being a red teamer, or a black hat hacker.

[00:27:02] Guy Podjarny: One more question on the dev side, and we're going to level up a little bit. What you suggested right now is good for getting them engaged, getting them alive. How do you measure if they're doing a good job? I guess that's not just the developer's job but security as a whole. But how do you know that it's working?

[00:27:21] Steve White: There are, I think, various ways to answer that question. But ultimately, for me it comes down to this. Number one, the risk metrics that you use as an enterprise, are they improving?
Every enterprise has a set of risk metrics that they're tracking as it relates to all parts of the organization, and so the way to look at this from the app developer side is the subset of those metrics that apply to the custom-developed applications and the platforms on which they're running. One way of looking at this is simply asking: are those risk metrics going down or up? Whichever way they're going, have a pretty open, honest conversation about what it is that's driving those metrics down or up. So that's one way.

Another way to look at that would be in the – I'm thinking of the relational metrics of it, right? I'm big on actually having people who are working together basically rate that experience, rate their collective collaboration. I'm looking for a word here, but it's like, how do I rate the effectiveness of my collaboration? That's ultimately what I'm after. I think –

[00:28:34] Guy Podjarny: Almost like a sentiment element –

[00:28:36] Steve White: Yes. It's almost like an NPS. It's like a net promoter score, but it's internal. I would say you're seeing success in this scenario if you ask the app development organization, "Hey, how good of a partner is security? How easy is it to follow the guardrails? What roadblocks are you seeing in the security processes?", and vice versa. You marry that up with similar questions from the security organization. Hey, how receptive do you find the app development teams? How collaborative is your relationship with them? How is the quality of the security in the custom-built applications going? Ask these questions on both sides of that equation and focus on the collaborative aspects of it, more than "are the DAST tool results going up or down in terms of vulnerabilities". That's sort of an irrelevant metric to me, frankly.

The number of vulnerabilities found in the source code, well, that can change just as I release a new tool into the environment, and it skyrockets, right? So I think more about metrics that are about collaboration and effectiveness, and then ultimately what really is getting through, like flow. How easy is the flow of applications through my security pipeline? If it's easy to flow things through that are secure, great. If it's not so easy to flow things through, even if they're secure, that's a red flag. You can put metrics around that. You can measure it.

[00:30:07] Guy Podjarny: That's awesome. I definitely agree with the sentiment that it's people first, and then you do those sorts of metrics. But there's a lot there. I hadn't heard this survey idea before, this notion of asking people directly whether they are collaborating well or not.
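For concreteness, the net-promoter arithmetic behind an internal score like the one Steve describes is simple. A minimal sketch, assuming the standard 0-10 NPS scale and invented sample data:

```python
# A minimal sketch of an internal NPS-style collaboration score, assuming the
# standard NPS convention: promoters score 9-10, detractors 0-6, and the
# result is (% promoters - % detractors). The survey data below is invented.
def internal_nps(scores: list[int]) -> float:
    """Return an NPS-style score in the range -100 to +100."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# e.g. app-dev answers to "how good of a partner is security?" on a 0-10 scale
responses = [9, 10, 7, 8, 6, 9, 10, 4, 8, 9]
print(f"internal NPS: {internal_nps(responses):+.0f}")  # +30 for this sample
```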
Before I ask you for your bit of advice, just one overlaying question. You've been in security for a while and you've seen it transition. You're also advising organizations specifically on that transition from pre- to post-cloud native. You must have developed, beyond the frameworks and all that, some sort of sniff test when you're coming in: some properties that you see that indicate, hey, this company is probably not, or is, very effective in security. What would you say are the key signals that trigger that alert, and how has that changed between, I guess, pre-cloud and post-cloud?

[00:31:06] Steve White: There's a lot of things I would say around that one. The first and most important I would go back to: is there an ongoing collaborative relationship between security and the application development teams? Is that relationship through service desk tickets, or is there an actual conversation happening on a regular basis? That, to me, is the key indicator of a problem. If there is not an active, ongoing, like daily or weekly, conversation happening with active participation from security and app dev, then I think you need to dig deeper, because I think that's indicative of a challenge. If all communication is through service desk tickets and those service desk tickets take three weeks to resolve, that's an indication of an organization that may have some challenges in these kinds of digital transformations. That's the point of it, right? These kinds of digital transformations are all about speed and agility, and you only get speed and agility if you're having active conversation versus trying to communicate sort of asynchronously. That's number one to me.

Number two comes down to that conversation about gates versus guardrails. If I look at an organization, I can pretty quickly determine: do the application development teams have some guardrails to function within, or do they have a bunch of gates they have to get through, where they have to toss things over a silo wall to get security approvals? That's another really big indicator, right? Those are two key ones, and then there's a whole ton of other ones, like how you're giving feedback on your security testing, how effective your security testing is and how well tuned it is, whether you're providing developer feedback at the keyboard-entry layer, etc. All of those are indicators, but the first two always come back to this: How do they communicate? How do they collaborate?

[00:32:56] Guy Podjarny: They're both great. Would you say they are just as important in the sort of pre-cloud era as they are post? Or are they useful here when, like, would you actually not even recommend them if you're sort of developing some [inaudible 00:33:12] applications?

[00:33:14] Steve White: Yeah, but I'll be careful. It's not so much about the word 'cloud', because the word 'cloud' can imply a move to the public cloud. It can imply a lot of things. I use the term cloud native, which I like to define by four key things. Cloud native architectures and organizations are defined by microservices: they're writing all their apps as microservices. They're defined by automated CI/CD pipelines. They're doing SRE/DevOps kinds of things, and they're doing containers.

Those four things really make up a cloud native organization. I would say that for traditional enterprises who are not defined by those four things – if they're not doing containers, if they're not doing microservices, they're not doing automated CI/CD, and they're not doing agile DevOps – the existing security mechanisms may work fine for what they're trying to achieve, because the existing security mechanisms kind of grew up and were developed in that world, and they work fine if you aren't trying to make those kinds of transformations.

But those are the exact things that put a lot of pressure on the existing security measures, right? When you're doing automated, microservice-based, agile code development, trying to release features to production, say, daily, the traditional code review and code testing feedback cycle just doesn't work. The short answer is it can work for a traditional enterprise to continue to do security the way they have done it.
Most enterprises that are writing custom code don't have the luxury to stay there, though. They have to move into this cloud native world in order to compete. They'll become irrelevant if they don't.

[00:35:00] Guy Podjarny: Yeah. No, absolutely. Fully aligned, and you're right about this. I am sometimes lazy and say cloud where I really mean cloud native in the approach. On that one: for most enterprises, the switch to cloud native is hardly a one-time thing. They will, for a long period of time, have a portion of their assets, or their technology stacks, be cloud native, and a portion remain in that traditional surrounding. Would you advocate using those, if you will, cloud native approaches to security across the board, or would you actually bifurcate the organization and say it's better for the traditional parts to stay where they were?

[00:35:44] Steve White: Well, I would definitely suggest that it's best long-term to get the entire approach to security aligned with what I've described for cloud native. But if you have a bifurcated – Most enterprises are as you said. We've got a lot of existing things that we have to keep up and running and maintain, and those kinds of pieces. For most organizations, this is not an overnight transition on the security side either, right? So I would suggest that it's best for those organizations to start the transition now. Get moving on transforming security just like you've gotten moving on transforming application development, and have a plan and a strategy to make that transformation universal across all of security. But you don't have to do it overnight, just like you don't have to transform all of the application development overnight, right? Make it sensible for your organization. Make the right pace of change, something that the larger organization can consume, and do it in a planful way.

The ultimate answer is, this kind of transformation is great for all of security, but you don't have to artificially rush it to get there immediately for the whole organization. Do it incrementally, just like everything else.

[00:37:04] Guy Podjarny: A nice full circle as well to one of your first bits of advice, which is to pick the area you're going to start with first. It's very clear. This has been jam-packed with great advice, I think, for the whole journey. But I'll ask you for one more. As you know, I like to ask every guest that comes on the show: if you have one bit of advice, or even a pet peeve, something that annoys you when people start doing it, that you'd like to give to a team looking to level up their security, what would that be?

[00:37:33] Steve White: That would be: focus on the actual threats to your organization, not the science lab projects that your neighbor has dreamed up. Keep your organization focused on combating those threats. That, to me, is my number one piece of advice to any security team. Focus on the real threats, not necessarily all of the imaginary ones.

[00:38:02] Guy Podjarny: Excellent. Spoken like a person who's had many conversations with enterprise security teams with advanced [inaudible 00:38:09] threats and nightmares –

[00:38:10] Steve White: I've had lots of conversations with lots of really brilliant people. To be clear, there are organizations under some very, very sophisticated threats.
I have a pretty long military career in cyber, and there are lots of really interesting threats out there. But a lot of enterprises out there today aren't going to see those threats.

[00:38:29] Guy Podjarny: That's great advice. Steve, this has been a pleasure. Thanks a lot for coming on the show.

[00:38:33] Steve White: Absolutely. Thanks for having me.

[00:38:33] Guy Podjarny: Thanks, everybody, for tuning in. I hope you join us for the next one.

[END OF INTERVIEW]
May 7, 2020 • 42min

Advocating For The Securability Measure With Shannon Lietz

In episode 58 of The Secure Developer, Guy Podjarny talks to Shannon Lietz, DevSecOps Leader and Director at Intuit. Shannon is a multi-award-winning leader and security innovation visionary with 20 years of experience in motivating high-performance teams. Her accolades include winning the Scott Cook Innovation Award in 2014 for developing a new cloud security program to protect sensitive data in AWS. She has a development, security, and operations background, working for several Fortune 500 companies, and currently leads a team of DevSecOps engineers at Intuit. In this episode, she talks about the future of security and the progress the industry has made in closing the vulnerability gaps by, inter alia, maintaining continuous testing, ongoing production, and building sufficient capability within teams to know a good test from a bad one. But the problem is a long way from solved, and she shares with enthusiasm the new buzzword called "securability" and how this measure can be standardized to uplift the security industry as a whole.

Transcript

[0:01:27.9] Guy Podjarny: Hello, everyone. Welcome back to The Secure Developer. Thanks for tuning in. Today, we have really maybe one of the originators, the pioneers of DevSecOps with us, and really a bright security mind, in Shannon Lietz from Intuit. Thanks for coming on to the show, Shannon.

[0:01:42.2] Shannon Lietz: Super excited to be here. I love this show.

[0:01:46.4] Guy Podjarny: Shannon, we have a whole bunch of topics to cover. Before we dig in, tell us a little bit about yourself. What is it you do? How did you get into security?

[0:01:53.5] Shannon Lietz: Awesome. Yeah, I've been in this industry for over 30 years, and that makes me a dinosaur, as I always say. I feel the place I'm at in my journey is to really try and help the industry, take some of the lessons I've learned over that long career, and really try to make a change. My goal at this point is really to make a dent in the security problem, as a goal for my life and my career.

As part of it, I got into this basically with lots of curiosity and didn't even realize it was a mostly male journey. Nobody told me when I decided that computers were fun. I learned through lots of hard knocks, but basically this wasn't a path carved out for women. I thought, "You know what? The heck with it. I always do things that people tell me I shouldn't be doing." I started out with computers at a really young age and eventually learned how to do some really neat things that, again, shouldn't have been done.

At the time, they called it hacking. I thought, "Well, you know what? I want to be a hacker, so cool." Then eventually, it became illegal and I was like, "Okay, that's not a job." My dad was horrified by the fact that this could be a problem. Eventually, it turned out it actually was a job. You just had to do it a certain way. That was the beginning. I mean, when I started in computers, nothing was really illegal per se. The Computer Fraud and Abuse Act was interesting, and that shaped some of this industry.

Along the way, there were lots of trials and tribulations. Yeah, I started there, and I've been a developer, so I've written code. I'm so sorry to anybody who's still maintaining my code, God forbid.
Then, as you look back on 30 years, you're like, "Wow, I could have done a lot of better things."

Then I got into security, and I've even done ops. I always said that if I needed to make money and pay my bills, I would ops for food, and so I ops'd for food. Then eventually, I smooshed it all together and created a term that some love and some hate, and either way – here we are.

[0:03:50.9] Guy Podjarny: Yeah. It definitely has become the terminology of choice. We had rugged DevOps, we had also some variants, but it's very clear that DevSecOps is the term that emerged.

[0:04:02.0] Shannon Lietz: That's cool, because I've got a new one coming.

[0:04:06.0] Guy Podjarny: We've got some great further pioneering here to air on the show. Just a little bit of company and industry experience first, so we don't completely jump around a whole bunch of things. I think right now, you are at Intuit, right? Before that, you were at ServiceNow?

[0:04:23.9] Shannon Lietz: I was. I was at that wonderful other cloud company. I like cloud companies, as they seem to be fun. I was also at Sony before that. I mean, my track record is pretty much financial. I did telco work. I mean, I've had about 22 companies that I've worked for in this period. I've been at Intuit now for almost eight years, which is the longest job I've ever had.

[0:04:44.3] Guy Podjarny: Yeah. Well, definitely changing the streak here. What is it you do at Intuit?

[0:04:47.9] Shannon Lietz: I run the red team here at Intuit. It's relatively large. I would say it's an adversary management practice. A lot of people think of red team as something that's relatively surprising. We put a lot of science behind our red team capabilities. We've really been working on moving it forward to adversary management and trying to make it more scientific. I'm into a lot of math and science around trying to artfully measure the things that we all want to do, which is to make software more rugged and to make things more resilient, so that we can do the things we love, which is solve human problems.

[0:05:21.6] Guy Podjarny: When you talk about red team, what's the – I like to geek out a little bit about org structure.

[0:05:25.6] Shannon Lietz: Totally.

[0:05:26.2] Guy Podjarny: What does that red team get divided into?

[0:05:28.6] Shannon Lietz: I got to measure my red team recently to find out how many headcount I had, and I was pretty surprised. We have about 53 people, and we also just started a part of the red team in Israel; I've got four more people there doing red team. Actually, we've been pushing the bounds. We're applying more to application security and also business logic issues. That's neat. I think that we're always the willing participants to emerge and try to innovate in a lot of different security spaces, and I'm excited to see how that really advances us.

In my org structure, I have mixed threat intel with our red teamers. Also, we have this other group that basically runs a continuous exploit system. Essentially, we built containers to exploit all the things that we worry about, so people can feel pretty comfortable that things are going 24 by 7. Internally, yeah.

[0:06:27.5] Guy Podjarny: Are these a known set of attacks? Is it more regression-minded, or is it more almost bug bounty? Like something that –

[0:06:34.3] Shannon Lietz: Yes. I say it that way because it's a mix of a lot of things. Anything that could cause us to have a security escape is something that we put into this engine. The way that I tell it is: if you can conceive of it and it could be automated, it should go onto our platform, and that platform basically runs it across our attack surface – our production attack surface, our internal attack surface.

It's not everything yet that I'd love to see it do, but eventually my feeling is that that platform becomes really the way for us to continually level up against some of the exploitable surface. I think it's the way in which most companies are going to need to go, and I think the path forward is to really figure out how to do full resilience and regression testing against your attack surface, for both the things that you know and the things that you learn, and to pull that information in to essentially get there before the adversaries do.
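To give a flavor of what the core loop of such an exploit-regression engine might look like, here is a highly simplified sketch in Python. Everything in it (the check functions, target names, and escape reporting) is an invented stand-in for illustration, not a description of Intuit's platform:

```python
# A highly simplified sketch of a continuous exploit-regression loop, in the
# spirit of the platform Shannon describes. The checks, targets, and escape
# reporting are invented stand-ins, not Intuit's actual system. Each check
# encodes one known attack; a True result is an "escape" worth paging on.
import time
from typing import Callable

def check_default_creds(target: str) -> bool:
    """Placeholder: try a known default credential pair against `target`."""
    return False  # stand-in; a real check would attempt a login

def check_open_admin_panel(target: str) -> bool:
    """Placeholder: probe for an unauthenticated admin endpoint on `target`."""
    return False  # stand-in; a real check would make an HTTP request

CHECKS: list[Callable[[str], bool]] = [check_default_creds, check_open_admin_panel]
TARGETS = ["app.internal.example", "api.internal.example"]  # hypothetical hosts

def run_suite() -> list[tuple[str, str]]:
    """Run every known attack against every target; collect escapes."""
    escapes = []
    for target in TARGETS:
        for check in CHECKS:
            if check(target):
                escapes.append((target, check.__name__))
    return escapes

if __name__ == "__main__":
    while True:  # "24 by 7": rerun the whole regression suite continuously
        for target, check_name in run_suite():
            print(f"ESCAPE: {check_name} succeeded against {target}")
        time.sleep(3600)  # pause between sweeps (interval is arbitrary)
```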
The big mission is: get ahead and stay ahead of adversaries, and understand your adversaries for your applications. I think that people design their software, but they don't think about what the potential anti-cases are. I always say that security is basically a developer's edge case.

The security edge case is really important, but a lot of times people don't have time for it. My job, in my mind, is to make it faster for people to think about the edge case that's going to give an adversary an advantage, to allow the business to do what it needs to do, and to figure out where the risks are and help mitigate those risks, so that we can do the things that help solve customer problems. Instead of – everybody's been talking about the road to no. I've got to tell you, early in my career, I was the road to no. Everybody would route around me. It was the coolest thing. I was definitely that little picture with the little house in the middle, or the little guard shack in the middle, and the gates and the snow.

I love that one, because it's always a reminder to me. I actually framed a copy for myself, so I could keep myself humble. Because the people that now, I feel, we support, and who subscribe to the things that we do to help them, they're coming, they're asking, and that behavioral change was something that had to start in us first, in me first, and then basically extend out to everybody that you touch and help.

I think that being meaningful in their lives, to try and help change how people think about things, is actually the journey forward for security. For me, this adversary management capability has extended into things like using AI and ML. Now my team fully codes. When I first started this, I remember I had this really cool little presentation on DevSecOps, and I framed that for myself, because we do these things to remind ourselves of where we came from.

It had a snail and a hare in it, and a little lane that I developed. I was trying to explain, basically, that this was the path to go faster in a secure way and a safer way. I'll never forget that, because I delivered it here in San Diego to the ISSA. It was a small room of about 30 or 40 people who had never heard of what DevSecOps was, and they were like, "This lady's crazy." I think it's been eight years since that talk. It feels like it just flew by, and there are so many people now, you hear, who are starting to see more security in software, and their products and services are getting better. Is it perfect? No. Have we taken a significant dent out of the stuff that was out there at one point?
I think the answer is yes. I just saw some metrics from GitHub about the vulnerabilities showing up: of all the vulnerabilities that show up, they're basically seeing 20% of them getting closed. That's in no small part thanks to a lot of companies out there that are providing that information to developers, so they know about these things and don't have to go figure it out on their own.

I mean, for the companies I've worked for where that wasn't available, developers were like, "What should I worry about?" We were like, "Oh, we just need to go get CVSS scores for you, and here's a spreadsheet. Go figure it out for yourself, dude. Thanks." I think that was a serious problem, because it inhibited their ability to develop safe software; they didn't have the time to go crunch the spreadsheets. I mean, let's all be honest, that's basically a full-time job for a security practitioner. Somebody still has to build the software, so you can do something with it. From my perspective, there's a lot that goes into this.

[0:11:00.9] Guy Podjarny: There's a bunch to unpack there; I've got a bunch of follow-up questions. Let me backtrack a little bit. First, I love that notion of the continuous exploit system. I've often thought about bug bounties and the like as almost continuous monitoring, chronic monitoring, telling you if something is wrong. I think this type of internal system makes a lot of sense, to ensure that alongside the red teaming and the creativity, the questions you know to ask keep getting asked continuously, so you don't regress, and you can do that at scale. How do you maintain that? Do you take in feeds from, basically, the botnets out there? Is it more about fixes for problems you've already seen in your surroundings? What would you say are the rough primary ingredients of the types of attacks that get run?

[0:11:55.1] Shannon Lietz: Oh, gosh. We take in everything. There's no data set I turn away, because honestly, there are always nuggets in everything. They always tell you no two scanners actually scan the same things, and they never scan them the same way. I think people are really creative about how they test for security problems, so we take in any bit of data we can get. We've taken in stuff from a variety of product vendors who do scanning. We're looking at the bill-of-materials companies, all the stuff we can get from them. Anybody who is asserting a CVSS score, a CPE, a score of any type that would actually reflect a vulnerability of significant risk: all of those things are useful.

To me, they're lagging indicators, however. The other thing that we take in is threat intel. We're constantly looking for vendors and providers that have information that can help us get ahead. Why not be able to find the zero-day? What about signatures that are actually written against your company specifically? Why not harvest those, use them, learn from them, and then replay them against your systems? Because essentially, that's a really great way to build up your catalog of how to make yourself harder to beat from a resilience perspective. That took a lot of years to learn.

I will tell you, this was not a "Hey, by the way, this is what we're going to do" plan eight years ago.
It was a lot of trials and tribulations, and my little sign on the back wall here basically says, "Bang your head here." It's been banged a lot of times. I mean, hey.

[0:13:18.6] Guy Podjarny: You take all that information, and then your team, you said, codes and builds it, like this is an operational system that runs those exploits against production, or more against staging [inaudible 0:13:28.9]? How do you handle that?

[0:13:30.4] Shannon Lietz: What is production? I mean, that's really cool. We got rid of that in, what, 1980? No, I'm just kidding. Production to me is everything. Nowadays, even development systems are production, right? Even a lot of these capabilities that are out there, they're more significant than you'd think. Pretty much at this point, if your developers are down and productivity is lacking, aren't you down, essentially?

[0:13:57.2] Guy Podjarny: Absolutely. I love the approach: all these systems are production, and they're all impactful. Oftentimes, one of the concerns that comes up when you run these continuous tests against, let's say, the more production –

[0:14:11.5] Shannon Lietz: Production?

[0:14:12.4] Guy Podjarny: When you run it, there's always this fear of, "Hey, you're going to mess something up." [Inaudible 0:14:16.2].

[0:14:17.1] Shannon Lietz: Don't take production down. It's the one rule, right? That one rule: don't take production down, which is why you've got to think about everything as production. If you delineate that this system is okay but that system is not okay, to me you miss the major principle, which is: if you're going to do resilience testing, you need to be mindful of the things that you're testing. You need to test your tests, right? That's a thing.

You need to build your tests in a meaningful way, not just throwing garbage at a system, but throwing something that's precision-oriented, where you're looking for a specific issue and you're actually harvesting that issue as an escape. Not poking around at it in a way that doesn't really provide that precision. My mindset about testing in production and resilience testing is that major principle. Everybody's always asked, "What are your rules for your team?" I'm like, "I have one rule: don't take production down." Because honestly, that's a meaningful issue for most companies, especially ones in the software industry.

I think the second piece of this puzzle for us is: build enough capability in your teams to understand what's a good test and what's not a good test, and have that scientific set of principles about how you develop those tests, to make it so that they work in your organization. I'd love to say that eventually this tradecraft will be able to move into the teams. That's possible, and I think as the industry commoditizes, the tests that you could run will actually be built by external companies, and there will be ways to create them, tune them, and tweak them, and developers could run them.

I think it absolutely is possible for us to get to true DevSecOps, where a developer can build safe software, operate it, continually secure it, and have it resilient against attackers. I eventually think that is possible for an individual to do, but not without assistance. It's not without buying specialty capabilities.
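To make the shape of that continuous exploit platform a little more concrete, here is a minimal sketch of what one precision-oriented exploit regression check might look like, in Python since that's one of the languages her team uses. This is an illustrative reading of what Shannon describes, not Intuit's actual system; every name in it (ExploitCheck, attack_fn, the sample probe and finding ID) is hypothetical.

# Hypothetical sketch of one precision-oriented exploit regression check,
# illustrating the ideas in the transcript; not Intuit's actual platform.
import datetime

class ExploitCheck:
    """One automated, narrowly scoped attack, replayed continuously."""

    def __init__(self, name, targets, attack_fn):
        self.name = name            # a specific, known issue (e.g. a past red-team finding)
        self.targets = targets      # the slice of attack surface the issue applies to
        # attack_fn must be side-effect free: rule one is "don't take production down".
        self.attack_fn = attack_fn  # returns True if the exploit still lands

    def run(self):
        escapes = [t for t in self.targets if self.attack_fn(t)]
        return {
            "check": self.name,
            "ran_at": datetime.datetime.utcnow().isoformat(),
            "exploitable_surface": len(self.targets),
            "escapes": len(escapes),  # each escape would become a P0 ticket
        }

# Example: replaying a hypothetical finding across a few hosts.
def probe_open_admin_port(host):
    return False  # a real probe would attempt the exploit non-destructively

check = ExploitCheck("redteam-2020-042", ["web-1", "web-2", "api-1"], probe_open_admin_port)
print(check.run())

The properties worth noting are the ones Shannon calls out herself: the check targets one specific issue rather than throwing garbage at a system, it harvests escapes over the surface the issue applies to, and it is safe to run against production around the clock.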
We have to, as an industry, in my mind, create that Nirvana, so that we're not also burdening people. What I would say right now is, if you look at some of the surveys that have come out, the DevOps and DevSecOps surveys about burnout and some of those things, well, the problem (and I did a huge study on this) is that we're not seeing enough investment in small businesses that are trying to solve the commoditization of security in a way that's actually going to be meaningful. I'm not sure that people really grok the full problem space of making it so that developers can leverage these services and capabilities, so that they can do the work of integrating it but don't have to invent and understand every facet of it to be the expert practitioner.

I just think that's the difference between having a security team that's off to the side doing it for people, and having it be something that somebody can fully integrate into their workload.

[0:17:19.4] Guy Podjarny: Yeah, absolutely. You mentioned that your team now codes, and that was actually one of the other bits that really – how have you seen that skill set evolve? This is definitely a forward-thinking approach, and a lot of the guests on the show talk about how their teams today code. What was your timeline, and your views on changing that skillset? Which skills do you feel are needed less, assuming you don't just want increasingly perfect individuals on the team –

[0:17:55.1] Shannon Lietz: How do you trade it?

[0:17:56.0] Guy Podjarny: – to sacrifice for more coding skills today.

[0:17:59.7] Shannon Lietz: Yeah, exactly. How do you trade the workload of today for the workload of tomorrow? It's definitely a challenge. I think when I first got started, I probably trivialized it a little bit, because I already had some coding skills, so I was rebranding it to myself and realizing it's important in my life. At the time, it was an oversight on my part to be so cavalier about it being less than difficult, because I think being a developer is a difficult practice. There are so many things to consider; you're not just code-slinging, if you will. You're actually looking at the human problem and trying to find an elegant solution that can be easy for people to really embrace. You're lowering the complexity for them, right?

When we first got started, I think it was like, "Well, Ruby's easy enough. Let's all do Ruby." There were some definite opinions about whether we would do Ruby or all the other languages of choice. Frankly –

[0:18:55.3] Guy Podjarny: There hasn't been [inaudible 0:18:56.2] languages.

[0:18:57.7] Shannon Lietz: No, never. There's never an opinion in the bunch about that at all. I had a few people who could write some Ruby code, and I had some people who did Java and this, that, and the other thing. I think Ruby won out ultimately, because Metasploit was in Ruby, and a bunch of people had done modules and things like that, so it was just easier that way. There are definitely a lot of hacking tools that started out in Ruby and have migrated to different languages.

Some of my team now does Python. We've definitely gone after different languages along the way; some folks are doing Go. Everything has its place. When we first got started, it was easier for us to all go together on one language that was going to help level everybody up.
Meaning, it was easy enough: it wasn't a compiled language, and you didn't have to get into all the harder stuff. We started with what I would consider an easier language to address. Some might find that debatable, right? They might say, "Hey, Ruby's not that easy." I'll say that was just a choice we made together. It started with only a few people, and obviously now most of my team codes. I can't even think of one person on the team at this point who doesn't code. If a manager has to do something, quite often they're breaking open a SQL query at the least, even just to run a report, as an example. Even the managers are finding themselves having to code. They're putting things together, snapping in APIs. That's a big thing now.

The question is, what do you really trade off? I would say, and I'm going to say it because I think it's really what does get traded off: you migrate from policy into code, so you're not writing as many documents, frankly. I think code that's well documented is a wonderful thing. I don't think people put enough comments in their code at this point. I read code all the time and I'm like, "Could you just comment a little bit more? I don't know why you made that choice."

[0:20:48.9] Guy Podjarny: [Inaudible 0:20:48.11].

[0:20:49.9] Shannon Lietz: No opinions. No strong opinions at all. Over-commented code is also a disaster, so I know. I would say where the industry seems to be heading is that we're lightening up on documentation. There are reams of paper being saved, trees across the world released from the horrible death of paper policies. I think that's where some of it's coming from.

I also think the other thing fueling the migration from one to the other is that there aren't as many meetings. It used to be that security was a meeting after a meeting after a meeting. The time you were sinking into those things, convincing people to go do the work and managing them doing the work, all of that is being walked back to, "Hey, I have code that will solve that for you. If you could adopt it, that would be great." Literally, I'm seeing programs being built by people who know what needs to go into them, and that gets converted into something you need to onboard. Security is migrating to the way of microservices, if you ask me.

[0:21:52.0] Guy Podjarny: Yeah. Those are great insights, right? Fundamentally, if you build solutions and you build tools, you're a service provider. You don't need to be peering over people's shoulders all the time, which, in the form of meetings or chasing somebody to read a document, would take up your time.

[0:22:10.0] Shannon Lietz: Absolutely.

[0:22:10.9] Guy Podjarny: We've been talking about how we're evolving; you've made all sorts of comments about how the industry has evolved, maybe not quite as far as we'd want, but it's evolving. What would you say today, and you talked a little bit about DevSecOps, so if we cling to that term: what are the biggest gaps? In rolling it out and rolling out the mindset, what areas do you feel we don't yet know how to do, or that people are especially resistant to?

[0:22:39.4] Shannon Lietz: That's the stuff I like to dig into. Over the years, there have been lots of insights here.
I would say the biggest aha moment for me, the needle mover that's really starting to fuel people coming closer to a better state, is having measurement. All the maturity models are right; it just takes a lot to convince yourself that they're right. I used to love and hate maturity models, because you're always writing so many documents to get to level three. I keep telling people, why do you need level three when you can get to level four? Which is really measurement.

With the DevSecOps thing, the real challenge along the way, we keep saying, is culture. What I'm finding, and it's again an aha moment, is that it's really about how we talk about security and what it means to our businesses. Having some of that business acumen as security practitioners is just missing in our industry. Now I'm spending a lot more time thinking about the business, if you will. What does it mean to have risk tolerance, as an example? Is security actually thought about at the business level? The answer is commonly yes; most companies consider it, especially public companies, because they're required to report on significant changes and outages, especially if they're going to materially impact revenue and things like that. I would say the business is definitely attuned to the fact that those are happening.

I think the challenge is, how do you actually take something that's non-monetary? You have things like fraud and other types of outages; those might be monetary. Some things are non-monetary. As an example, you might have an event, an incident, that takes time to resolve. You may have an investigation that you have to go do to make sure that nothing bad happened, right? The question is, is that something for the books? Is it in your risk-tolerance thought process? I think that's something DevSecOps needs to address.

Another couple of DevSecOps things need to be addressed too. Where's the market? I mean, we really do need to commoditize. There are not enough capabilities and products out there at a significant level, and on the science of how you apply them, we just haven't figured out how to really get developers into the mix yet. My belief is that companies actually trying to solve the developer problem, where you take security knowledge and capability, package it all up, and make it developer-friendly, so developers know where to put it in their CI/CD pipeline, have a significant impact on making software more resilient, and on the usage of that software too.

[0:25:22.9] Guy Podjarny: Amen to that, for sure. You and I have both talked a lot over the past year; one of the topics we're excited about is indeed trying to crack the measurement problem. You've alluded to a new buzzword, a new framework for us, called 'securability'. Tell us about it.

[0:25:38.6] Shannon Lietz: I am super jazzed about it, because we put a lot of time and effort into sciencing the heck out of security, right? Along the way, I used to have other measurements where I thought, if I could just teach a developer how to use this metric, it'll blow their minds, they'll love security, and they'll do something about putting security into their stuff.
I guess I changed my mind about the quest, and I realized I actually need to figure out what developers care about, so that I can have them understand what security means to them, so that we can get them to address it as part of their process, whatever that might be, whether they're using CI/CD or hand-jamming their code. There are a lot of different ways in which software gets built.

Essentially, measuring the resiliency of software from a security point of view is the craft, right? The idea behind a measurement that moves the world forward, I think, is in understanding the behavior you want. In my mind, the behavior I want is for a developer to be able to decide whether or not the security they have for their product is good enough. From my perspective, securability is a five-nines measure, because if you're going to do anything, you make it five nines. I learned that along the way; I worked for a telco. You learn a lot about five nines, and eventually you get told five nines isn't enough and you're like, "Are you serious?" I'm just going to go for the five nines. Honestly, if somebody can show me a five-nines secure system, I would love it. It would be amazing.

The way we've thought about making this meaningful is that you can use securability at a very low level, on a single component, a library even, and you can also roll it up a little bit at a time, right? Being able to roll up measures, I think, is also significant; that's a meaningful piece of the puzzle. From my perspective, securability being a nines measure means you don't have to teach a developer what it means. You've already lowered that intensity of learning, because you're applying something they're already pretty consistent with.

The question then is, what's the denominator, from a security practitioner's perspective? Well, we've all wanted to know what the bill of materials was for anything we work on. Imagine a CMDB and other types of systems that provide resource understanding for you: you know what your attack surface is. There are all kinds of companies out there right now trying to tell you what your attack surface is from the outside, from the vantage point of an adversary, so that you know: "Hey, that's on the Internet. Did you know that?" People are like, "Oh my God, I didn't know that was on the Internet." Honestly, I think those are amazing companies, because they're solving the denominator problem of figuring out what your bill of materials is.

Once you figure out what your bill of materials is, you have the opportunity to figure out all the known defects out there that could actually have a meaningful impact on your attack surface. As an example, you might have a CVSS 10 out there. That's going to apply to a handful of your resources, maybe, or all of them. Say you had a million resources with the same CVSS 10 bug; that's a bad day, because that's a lot of attackable surface, right?

Then the question is, what do you do with that? What's the numerator on it? The numerator is the escape. I like to say that escapes are a variety of different things. I'll start with a simple one: you've got an internal red team, they pwn you, they send you a ticket; in our case, it's a P0 ticket. You basically want to take that P0 ticket over that exploitable surface. If you only have one escape across all those different resources, that means, hey, you're really firewalling great.
You probably have good zoning and containment. Fantastic. You've got some mitigating controls in place, and you're one over a million. I would love to be one in a million, right? That would be amazing. Your securability is super high. One in a million: awesome.

Now let's say you had a one-for-one problem. Say there's actually only one system out there that has the problem, but you're literally going to get an escape one-for-one. You have zero securability. That's a big problem. The question then is, once you have that ratio, say you have zero securability against that particular issue, and say you have a lot of adversaries that would love to come after you, and they are, and they're going after that specific resource with that specific attack: you're breached. That's a very simple way of explaining security to somebody who wants to understand it and wants to do the right thing.

I think that resilience capability is super important, focusing on exploitability there, and understanding how to bring your losses to bear. Companies have fraud against their systems all the time. They have security problems against their systems. The red team's escape is one aspect, but you also have the losses captured in your incident capabilities, right? Why aren't you putting your incidents over your exploitable surface? If you had 30 incidents in a month and you know they applied to some of your exploitable surface area, your exploitable opportunities, then you have a calculation that says you actually had more risk, and your risk was realized, right?

I think that allows people to really take responsibility and be accountable for the security they're implementing or not implementing. It makes it super easy for them to know, on the face of it, without a lot of interpretation or subjectiveness, whether they're doing well there or not.

[0:31:08.9] Guy Podjarny: Do you see securability as a measure that every organization develops for its own surroundings? You map out your security threats, your bill of materials and known vulnerabilities, the things that are very clearly measurable, which could also include misconfigurations, right? Buckets left open, or open access points. You do those, then you look at the exploits and you back-calculate. What I'm referring to is putting in the time to historically understand the exploitable surface you had, and the incidents, whether full-on breaches or just forensic findings, and everything that happened on top of that, and then calculating it. Or do you see it as standardized: this is how we measure security, the way five nines for uptime are –

[0:31:58.6] Shannon Lietz: I think it's all of it.

[0:32:00.2] Guy Podjarny: A standard metric, right?

[0:32:01.3] Shannon Lietz: I think it should be a standard metric. I think you should have to put your bill of materials into your software; it rolls into a system; you have telemetry based on that bill of materials that helps you understand your attack surface; and you have testing going against it to help you monitor it. It should be a real-time system that helps you understand how you're doing on security, like an LED readout, measuring your resilience constantly.
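The arithmetic Shannon walks through above can be written down directly. Here is a minimal sketch of that reading, treating securability as one minus escapes over exploitable surface so that it reads like an availability "nines" figure; this is an interpretation of the transcript, not a published formula.

# Securability as sketched in the conversation: escapes over exploitable
# surface, inverted so it reads like an availability "nines" number.
# An illustrative interpretation of the transcript, not a standard formula.

def securability(escapes: int, exploitable_surface: int) -> float:
    if exploitable_surface == 0:
        return 1.0  # nothing exploitable, nothing to escape through
    return 1.0 - (escapes / exploitable_surface)

# Shannon's two worked cases:
print(securability(1, 1_000_000))  # 0.999999, "one in a million: awesome"
print(securability(1, 1))          # 0.0, one-for-one, zero securability

# Rolling the measure up across components (a library, a service, a product),
# here by simply pooling escapes and surface:
def rolled_up(components):
    total_escapes = sum(escapes for escapes, surface in components)
    total_surface = sum(surface for escapes, surface in components)
    return securability(total_escapes, total_surface)

print(rolled_up([(1, 1_000_000), (0, 50_000)]))  # just over 0.999999

Incidents slot into the same numerator: the 30 incidents in a month that Shannon mentions would simply be counted as escapes over the exploitable surface they applied to.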
If adversaries are measuring your resilience too, then it should help you find those problems as well. I also think you should be able to use that same methodology to go backwards-looking and figure out, hey, did we miss something? To your point, could you hand-calculate it? Absolutely. It'd be really easy if you have a bill of materials. Then, going forward, you should be able to forecast it. What I like to say is that when somebody designs a system, they should be able to understand their bill of materials and where they think there might be adversary happenings. I could imagine in the future we'll find a company out there that says, "Hey, we're monitoring your bill of materials, and we actually see adversary interest in these key areas," so if you have resiliency issues in those areas, the likelihood is very high that it's going to be a problem for you specifically.

I do think the way it's been invented, being specific to your company, is really important, but I also think it makes things shareable. If I wanted to share information with another company, I should be able to share the securability information in a reasonable way without necessarily telling somebody all the bits and details of my security program. Hopefully, that also helps people have the conversation that says, "Hey, yours is 99.9%, but mine's 97%, because we don't see the same adversaries as you do, and the number of adversaries we encounter is much lower."

Then people are having those risk-based conversations in a meaningful way at a business level, because really, this isn't just for software developers; it's also to solve for the people who have to have those conversations, where you're not talking about "hey, you're not doing it the right way." The how isn't the focus anymore. You're actually talking about the why and the what, right? You're getting into the business-level conversation of: what is your measure? Why is that appropriate? If you can build trust on that why and what, because that's where you build trust, you don't build trust on how, then you can create a meaningful ecosystem of people doing the right thing for the right reasons with the right intent, so that you can establish a much bigger barrier against adversaries.

[0:34:40.9] Guy Podjarny: I think the idea is compelling: something to aspire to as the measure of how secure, or maybe, in this term, how securable you are. The bill of materials of known components, whatever disagreements there are in the industry, has some factual elements: you were using this component, and it has this known vulnerability. How do custom vulnerabilities, the security flaws in your own code, or a misconfiguration, fit into securability?

[0:35:13.7] Shannon Lietz: I love that conversation. It's not a different score. It's all the same. I'm so tired of us talking about whether it's in the library, outside the library, upside down from the library. Who cares? It is all part of the bill of materials. If you have a configuration, it's part of your bill of materials. You configured it a certain specific way to work with your software package.
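As a sketch of what that might look like in practice: here is a hypothetical bill-of-materials entry where configuration sits alongside the component list, with a field for the declared design intent Shannon goes on to describe. The structure and every field name are invented for illustration; no actual SBOM standard is implied.

# Hypothetical bill-of-materials entry in which configuration, and the
# declared intent behind it, are first-class. Field names are invented
# for illustration; this mirrors no actual SBOM standard.
service_bom = {
    "service": "payments-api",
    "components": [
        {"name": "openssl", "version": "1.1.1g"},
        {"name": "flask", "version": "1.1.2"},
    ],
    "configuration": [
        {
            "setting": "listen_port",
            "value": 80,
            "intent": "deliberate: plain-HTTP health checks behind the load balancer",
            "accepted_by": "service-owner",
        },
    ],
}

# With intent recorded, a scanner that finds port 80 open can distinguish a
# known, accepted risk from a mistake nobody has accounted for.
def is_accepted(bom, setting, value):
    return any(
        c["setting"] == setting
        and c["value"] == value
        and c["intent"].startswith("deliberate")
        for c in bom["configuration"]
    )

print(is_accepted(service_bom, "listen_port", 80))  # True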
We really need to focus on a bill of materials standard that says: if I had to look at your system, rebuild it, whatever it might be, I could actually have information that suggests what risk you took and why. If you wanted to leave port 80 open, I shouldn't have to find that out from some scanner out there in the world. I should know your intention was to leave port 80 open, or that it was a mistake and you're taking accountability for it. You have a system that even knows your intent was this design. That bill of materials is also about your design constraints, and your design intent is really important, in my mind.

[0:36:09.3] Guy Podjarny: In this model, the more detailed your bill of materials, the more information you provide, to an extent you might actually get a lower score. But you're not tricking anybody; it's your own system you're trying to measure. The more information in it, the more accurate your score is, whether that's higher or lower. Is that –

[0:36:26.6] Shannon Lietz: That's right. And in addition, you benefit from providing a much more accurate bill of materials, because the downside of not doing it is that adversaries find it before you do, before your friendly partners do. It would be much better to be accountable for good security than to find out from bad guys. From my perspective, it's only a benefit to be able to identify these intents and designs, so that you can actually root issues out. I think that's what the principles of resilience are about, right? We all want to be resilient. If we're afraid to put this information in because we might be judged by it: I would rather be judged by an internal, friendly red-team adversary than by an external, unfriendly adversary who's going to cause your company to have challenges, right? From my perspective, they're very different.

[0:37:20.1] Guy Podjarny: Yeah. Very well said. Have you been experimenting with securability within Intuit? Are you using that measure?

[0:37:26.6] Shannon Lietz: Yeah, absolutely. We've been working with it directly for about a year and a half, so we've got lots of information and data, and we've done a lot of work with it. I would say that in the initial stages of doing anything different from what everybody else does, your why is so important. Honestly, I started looking around the industry and questioning a lot of the things that were out there, because they just weren't solving some of the problems.

I believe securability will eventually lead to the capability of automating it, and even making systems able to do self-resilience. If you have good intent and you can do resilience measurement, eventually we might be able to automate risk most of the time, right? Automating risk and complexity, I think, is the right thing to chase. I was looking at most of the things out there, most of the frameworks, and there's nothing to say they're bad; I actually think most frameworks are pretty awesome, in that somebody even tried in the first place. But I didn't see anything really solving for that notion of automating this, so that it can be done by a system and be a support system for your developers. From my perspective, that was the why. I think at Intuit, we've done a job of always trying to be better than we were last year at everything that we do.
That's a wonderful aspiration, and I love the mission. From my perspective, securability has become a thing. Is it in its final state, where we've fully matured on it? No, we're not there. I am definitely interested in the things we have ahead of us, because securability is worth it. Solving these problems is no small feat, because just like DevSecOps, what securability is missing right now is the companies that are going to help create it, change it, commoditize it, make it easy to digest, make it consumable. If you look at the availability market, that's what securability could be for our industry. Look at the billions of dollars that have been generated by the monitoring and availability capabilities out there; there's a real market opportunity in trying to bring this to bear for our developers.

[0:39:30.9] Guy Podjarny: Yeah, I love the idea. We talk about its effect on [inaudible 0:39:34.0] more than measuring security, because it's about capturing security-related information specifically, from configuration, to dependencies, to known flaws, to various other elements, within this bill of materials that moves around. Then you're able to layer on top of that all the known attack surface and security flaws that you have. Then once you do that, you measure it, because DevSecOps follows through on that. One of the core principles is: if it moves, measure it. If it doesn't move, measure it in case it moves, right?

[0:40:16.0] Shannon Lietz: That's right.

[0:40:17.4] Guy Podjarny: We do that everywhere, and yet we're not doing it in the world of security. I'd definitely be keen to see it evolve, and we'll definitely build there on our end.

[0:40:27.1] Shannon Lietz: I'd love that.

[0:40:28.6] Guy Podjarny: I think this is – we can probably go on here for –

[0:40:32.3] Shannon Lietz: For hours, probably.

[0:40:34.3] Guy Podjarny: An hour longer, but I think we're probably already a little over time. Before I let you go, Shannon, I like to ask every guest that comes on, and you've already given a whole bunch of advice, but I'll ask for one more bit: if you have one small bit of advice to give a team looking to level up their security foo, what would it be?

[0:40:56.3] Shannon Lietz: Yeah. For somebody who's looking to up-level their security skills, I would say the one question you should ask yourself is: how many adversaries does my application have? It's the curiosity around that question that will lead you to better places. Just having the goal of answering that question will lead you to find people you can contribute to, or collaborate with, who will help you answer it. I think once you do answer that question, it's mind-blowingly obvious what you have to do to fix the problems that might actually be in your applications and in some of the code you're writing.

[0:41:35.6] Guy Podjarny: Very cool. Definitely sound advice to focus on. Shannon, this has been excellent. Thanks a lot for coming on the show.

[0:41:43.3] Shannon Lietz: Thank you.

[END OF INTERVIEW] Follow UsOur WebsiteOur LinkedIn
undefined
May 5, 2020 • 26min

Integrating Security Into Development With Neil Drennan

Many banks are still running on decades-old legacy technologies, but the security and performance advantages cloud-native systems offer are changing that. Today, we’re going into the future of banking technology with Neil Drennan, CTO at 10x Future Technologies. His firm is building the first cloud-native banking platform that can be used by large-scale banks to solve the cost and security problems caused by their legacy systems. Neil fills listeners in on his role and the overall mission at 10x before diving right into the topic of how they integrate security into their development practices. Often, security and development teams find it difficult to integrate with each other because they are kept in separate silos from the outset. Things are different at 10x, though, as Neil explains, talking about the back-and-forth conversations between his different teams and their use of vulnerability dashboards to keep things transparent. Neil weighs in on the necessity for 10x to get security right, and the benefits of working with banks as clients because of their high level of insight into potential threats. We hear about all sorts of amazing improvements to threat monitoring that cloud-native solutions can provide, making the legacy moat model look outdated indeed. A key takeaway from Neil today is the importance of building security into development from the ground up, so tune in to hear how he manages best practices at 10x. 10x is looking for more talent to join its team, with roles in the UK in London and Leeds. You can see their latest roles here. Show notes and transcript can be found here. Follow UsOur WebsiteOur LinkedIn
