The topic of application security has never been more important. So how are companies approaching appsec? What should companies do to ensure appsec gets the attention it needs? For episode 61 of Cyber Security Sauna, Antti Tuomi, who works in Japan and Antti Vaha-Sipila (known as AVS), from Finland, joined the show to share their thoughts on changes in application security, shifting left, supporting developers, “level boss testing,” and much more.
Janne: So guys, over the course of your careers, what changes have you seen in how application security is regarded by organizations and how they tend to approach it?
Antti Vaha-Sipila: Well, at least for me, the one interesting trend that I’ve seen is the rise of threat modeling. 10 years ago if you looked at how companies were doing security, very, very few companies said that they do threat modeling, which in a sense is just thinking about risk beforehand, before starting implementation. But now it’s not rare at all. For example, we help a lot of people with threat modeling, getting the wheels off the ground, so it’s not rare anymore.
Antti Tuomi: I think it’s kind of interesting also, speaking about this with AVS here, because you have a lot of experience in enabling security and development, as part of a development organization. Whereas during my history in security, I have mostly been on the security auditing, security testing side of things. There has been a lot of change on both sides of that spectrum.
When I was starting in the application security industry or business, like back in the late 2000s, usually we would do a security test as ordered by the customer. And then we would find some vulnerabilities. And that would be the norm back in the day. So of course, cross site scripting, SQL injections, all those things were reasonably new.
And usually when we held the meeting about, okay, here are the results, usually the first thing would be like, “Well, is this really an issue? I mean, we have SSL, so does it matter if we have these scriptings and these injections? Our database server is in the intranet, you can’t access it, like this isn’t exploitable, is it?” So we had to go through that whole process of, okay, which part of the security stack actually takes care of which layer of security.
And that is definitely, fortunately, a discussion that I do not remember having had in the last 10 years now. So I’m very happy about that.
This must have been around the same time when somebody asked me, if SQL injections are a thing, why are people still using SQL? So, do you feel that companies are more aware of what these security basics mean, now that we’re having fewer of those conversations?
Antti Tuomi: Definitely. I think the overall security awareness – security not just being antivirus and firewalls, but also being how you build the applications – that is now a piece of common sense, that yes, we do need to also protect our applications, not just the networks and the endpoints.
Antti Vaha-Sipila: Right. Antti, when you say that you haven’t seen that sort of stuff for a while now, do you think that the root cause is just about awareness? Or is it about frameworks and architectures getting safer, or something like that?
Antti Tuomi: That’s actually a very good point. And I think you are absolutely correct in hinting that frameworks and everything do have a big part in that. Because if you consider, for example, the web application development languages and frameworks in like late 2000s, early 2010s, you would still have a lot of applications written in PHP, just vanilla PHP. And if you didn’t know that SQL injections are an issue, then how would you prevent them?
And nowadays, a modern MVC framework with an ORM layer on top of it, it is actually more difficult to write an injection flaw than it is to write like a normal safe query. So I think you’re definitely right about that.
Antti Vaha-Sipila: The reason I asked was because the other big trend or big change that I’m seeing, especially in greenfield development when you’re doing something completely new, is that the architecture nowadays tends to be cloud native, so we see a lot of APIs, often in microservice architectures.
And I think that that does carry a significant security benefit, because APIs are much more restricted in what they can accept. For example, you don’t have to do that sort of page-building on the server side anymore. And so, I think that is the major contributor, from my perspective, to the situation.
So do these frameworks and new ways of working mean that we’re actually making software secure by design more, or more easily, than before?
Antti Tuomi: Not necessarily by design, but the basic layers and premises of security are often abstracted away. So that the average developer does not have to think like, okay, with my MVC framework and ORM framework in 2021, how do I write an SQL query? What type of a join do I need to make? Instead you just query for objects in the code and you just get it.
So it’s completely… You don’t need to touch the internals. And that definitely helps in also making sure that you don’t touch the internals in the wrong way.
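To make the abstraction point concrete, here is a minimal sketch of the parameter binding that ORM layers do under the hood. Plain sqlite3 stands in here, since the speakers name no specific framework; the table and data are invented for illustration.

```python
import sqlite3

# In-memory database with an invented users table (illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # classic injection payload

# Unsafe: string concatenation lets the payload rewrite the query.
unsafe = conn.execute(
    "SELECT role FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: a bound parameter is treated strictly as data, never as SQL.
# This is what an ORM does for you when you "just query for objects."
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()

print(unsafe)  # [('admin',)] -- the payload matched every row
print(safe)    # []           -- no user is literally named that
```

The point the speakers make is that with a modern framework, the safe form is the path of least resistance.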
Antti Vaha-Sipila: Yeah. And if you spread your architecture in a microservice way, for example, the database might be owned by one single microservice that only does its thing. It doesn’t do anything more complicated than respond to well-formed queries, or HTTP and JSON. Even if somebody would compromise your application, they wouldn’t be able to, for example, escalate their access to the database that would reside on the same box, for example.
So, in a way it is security through design, but I think that security is one of the many side effects of the new design. It’s not necessarily why things were designed that way.
Okay. So you think there’s still ways to go for companies before they’re in that fully sort of secure by design mode?
Antti Vaha-Sipila: You could say that now that we have this sort of robust architecture, this is what we mean by security by design, and then, I don’t know, you could just paint something white and call it white.
(Laughing) Fair enough.
Antti Tuomi: One of the things that I find intriguing is that, in the late 2000s, OWASP Top 10 was kind of a reasonably new thing. If I remember correctly, the first edition came out in 2004, I think, and the next one in 2007. And one of the interesting things that I think about the change, or the lack of change, is that OWASP Top 10 is still quite relevant. And I think especially over here in Japan, and I think also in Europe as well, we still very often get requests like, could you test whether our application has any OWASP Top 10 vulnerabilities?
So although the standard has changed a bit, the basic premises are still there, and a lot of the vulnerability categories are still very similar or the same. It’s still one of the base resources: do we have OWASP Top 10 vulnerabilities? That’s what customers want to know.
A couple of years ago I saw a stat in a Black Hat presentation that one in five developers have never heard of the OWASP Top 10. Is that your experience with developers these days, and their level of security knowledge?
Antti Tuomi: I would have to say that, at least based on my experience in Europe, I would say that most developers were already familiar with OWASP Top 10, like in the 2010s and going forward. And I do not remember having to introduce OWASP Top 10 to anyone like within the last 10 years. And at the moment, over here in Japan as well, it seems to be a well-known basis for application security.
But at the same time, I do also find that there’s a lot of cases where the customers focus so much on the OWASP Top 10 that they forget everything beyond it. For example, XML external entities were not necessarily directly a part of the OWASP categories, at least not at all times. And in many cases, they might forget that that’s actually something we need to look for. So there’s more to it than just the OWASP Top 10.
Antti Vaha-Sipila: OWASP Top 10 is a very wide generalization. I mean, everybody, every single company, has a slightly different Top 10, if you’d really look at the types of vulnerabilities they have.
When I started at a software company as an appsec person in 2011, the first thing I did was I just got all the reports from the past three years, and went through them – security testing reports, I mean. And I just categorized all the findings, and tried to figure out what’s the type of a flaw or bug that we are seeing in that company. And I’ve got to admit it was really not OWASP Top 10. For example, database injections were not a thing at all, mostly due to the architecture that they had adopted.
Antti Tuomi: So you’re saying like the Top 10 for a bank, might not be the Top 10 for an IT service provider?
Antti Vaha-Sipila: Well, definitely not. And even within a vertical, it depends on the company and the type of architecture and software stack they’ve chosen. I mean, if you’re doing old-school PHP, you are vulnerable to different types of things than if you’re using a very mature framework.
So I don’t know if it matters that much, that the developers definitely have to know OWASP Top 10. They have to know what are the specific risks for their type of architecture and stack.
Sure. I mean, if you know the exact situation in your organization, you’re always going to be better off. But if you have no starting point at all, I would argue that the OWASP Top 10 is better than nothing. Like if you don’t have that company-specific information, start from this.
Antti Tuomi: I mean, OWASP is well known, but AVS, have you seen or heard of any other like upcoming resources that especially security-aware companies refer to nowadays, in Europe?
Antti Vaha-Sipila: Those companies that are looking to integrate security earlier on in the development process, many of them talk about these software maturity models. One is OWASP SAMM, whose current version is not that bad anymore. And there’s the BSIMM model from Synopsys, ex-Cigital, which is probably more well known. I think those are the things companies would actually mention nowadays, when talking about this.
Antti Tuomi: That’s interesting. One of the topics that often nowadays comes up when we talk to these modern application developers, who are running on, for example, AWS cloud infrastructure, and running full DevOps cycles…One of the things that we often get asked is like, okay, is there something we need to do to secure our DevOps pipeline? So I’m kind of wondering when are we going to get the, not the OWASP Top 10, but the DevOps Top 10, list of things you should be doing or should not be doing.
Antti Vaha-Sipila: Actually, I’m going to guess that that already exists in at least three different companies’ marketing materials. I mean, that sounds like something that you do if you’d have a tool.
(Laughing) Yeah, that makes sense. But like, OWASP is still like the most widely known out there, and the new Top 10 list is out this year, some changes in there. What do you guys think about new items on the list? My particular favorite is insecure design.
Antti Vaha-Sipila: Yeah, speaking about tools. I mean, OWASP Top 10 has been used a lot for like classifying findings from automated tools. So, good luck automating that one.
Antti Tuomi: Definitely. And I definitely think that category, insecure design, belongs there. For example, if you expose your internal VPCs in your AWS cloud infrastructure, or you expose your S3 buckets to external users, that might be insecure design or insecure configuration. So it definitely does belong on the list, but the instructions on how do you check for this, how do you assess this, how do you fix this? Well, design it better.
Yeah. It’s a bit of a catchall category, and the main reason for security mistakes is that people make security mistakes.
Antti Tuomi: Exactly.
Antti Vaha-Sipila: It’s very hard to operationalize that. So, I mean, on the privacy or data protection side of things, there are many checklists. How to process personal data securely, do this and this and this and this. And then, as the last one, hey, remember, you have to be compliant with regulation. Why don’t you code already?
Exactly. Yeah. So, okay. So, what kind of issues are you guys then finding most when you’re looking at applications? What’s your personal top 10?
Antti Tuomi: I think kind of a pattern that has been continuing for me for maybe the last five years or so, since 2015, is that technical vulnerabilities are getting harder and harder to find, especially when it comes to the traditional, cross-site scripting and SQL injection, types of things. And that is likely very much also due to the frameworks in place.
But one trend at least, that I personally think still applies is issues related to access controls. For example, being able to see another user’s data, or being able to change some parameters in like an API request, just being able to access a feature or data that you should not be able to. Those are still reasonably common.
And of course, also, those so-called business logic vulnerabilities where you skip a step, or you skip the payment altogether and still get the product. These kinds of, utilizing the flow of the application in an unexpected way, are still fairly common.
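The access control findings Antti describes usually come down to a missing object-level check. A minimal sketch, with invented record and user names, of the check whose absence produces "see another user's data" findings:

```python
# Invented in-memory store standing in for a real database.
RECORDS = {
    "r1": {"owner": "alice", "data": "alice's invoice"},
    "r2": {"owner": "bob", "data": "bob's invoice"},
}

def get_record(record_id: str, requesting_user: str) -> str:
    record = RECORDS.get(record_id)
    if record is None:
        raise KeyError("no such record")
    # The step many vulnerable APIs skip: verify that the caller owns
    # this object, not merely that the caller is authenticated.
    if record["owner"] != requesting_user:
        raise PermissionError("not your record")
    return record["data"]

print(get_record("r1", "alice"))  # alice's invoice
```

Changing the `record_id` parameter in a request is exactly the kind of test a security tester tries first; without the ownership check, "r2" would happily come back for alice.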
On the other hand, what’s kind of interesting I think is like, what are the types of issues that development organizations are struggling with when it comes to these application infrastructures? What types of threats catch the developers and the product owners off-guard?
Antti Vaha-Sipila: I’d say that supply chain issues are really a big thing. If you’re doing threat modeling, you tend to find more architecture and design level issues, not implementation level issues. Unless you’re doing retroactive threat modeling, in which case you can actually find bugs.
But a typical pattern maybe for a finding could be that you have an assumption that a component works in a certain way, or comes from a trusted source, when in fact it doesn’t come from a trusted source, and it doesn’t exactly work in the way that people assumed it would work. That causes a security issue, or a potential weakness at least.
Antti Tuomi: I guess you’re talking about, for example, these dependency confusion attacks and-
Antti Vaha-Sipila: That is one type of it, yes. But also, if you call an API, what sort of assumptions does it make about its inputs, and can you really trust the data it sends back? It might be that you are the first one who actually has to validate the data you get as a response, even though it is coming from your internal API. It might be that nobody else has actually validated the data before you. So, things like this.
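The "you may be the first one to validate" idea can be sketched as a small response validator. The field names here ("user_id", "amount") are invented for illustration; the conversation names no concrete API or schema.

```python
# Treat even an internal API's response as untrusted until checked.
def validate_payment_response(payload: dict) -> dict:
    user_id = payload.get("user_id")
    if not isinstance(user_id, str) or not user_id:
        raise ValueError("missing or malformed user_id")
    amount = payload.get("amount")
    # Exclude bool explicitly, since bool is a subclass of int in Python.
    if not isinstance(amount, int) or isinstance(amount, bool) or amount < 0:
        raise ValueError("amount must be a non-negative integer")
    # Return only the fields we validated, dropping anything unexpected.
    return {"user_id": user_id, "amount": amount}

ok = validate_payment_response({"user_id": "u-1", "amount": 250, "extra": "x"})
print(ok)  # {'user_id': 'u-1', 'amount': 250}
```

The design choice, under the assumption that upstream did nothing, is to whitelist fields rather than pass the payload through.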
Very rarely have we had to touch the types of technical vulnerabilities that are mostly in the OWASP Top 10. But now that the OWASP Top 10 has this design-level category, obviously we can claim that we are also doing OWASP Top 10 stuff.
We were already touching on sort of automating OWASP Top 10 testing or whatever, and that’s certainly a trend I think I’m seeing in the industry, towards sort of the commoditization and automation of application testing, rather than, sort of, a human crafting a test per solution. Do you think that’s happening? And do you think that there’s a concern that…like I would think that a machine is not going to be as adaptable as a human being. So is that going to be an issue?
Antti Tuomi: There’s a couple of very interesting cultural or regional anecdotes I have about that. So, when it comes to the Japanese culture, the definition of quality, what’s expected of tests, including security tests, is that they are well defined, repeatable, and kind of defined beforehand. So there’s actually something that’s often kind of expected from security testers like us as well.
And basically, the ultimate goal, if you are able to define those tests, have well-defined test cases and all that, then in that case, the test should be automatable as well. And I think that’s a very good thing to aim for, in the sense that all the tests we can automate, we should be automating, because then it’s repeatable, there’s a lower chance of making a mistake when performing it and all that.
However, at the same time, I do think that security is also about exceptions. And when it comes to exceptions to the rule, there’s an innumerable number of test cases we would have to perform to cover everything. And first of all, how do we test for insecure design? How do we define a well-defined set of test cases for that? Well, it’s not going to happen.
And in that sense, the drive for automation is very good, but I do still think we need this exploratory testing and expert review type of work as well.
Antti Vaha-Sipila: Antti, we discussed the other day about maybe the cultural differences, on where companies assume that exploratory security testing should be taking place, and that’s an interesting thing.
Antti Tuomi: Definitely. When companies are asking us for a quote for security testing over here in Japan, we usually say that, okay, we will also do testing for business logic vulnerabilities, access rights, and so on. And some of the customers are surprised, because they don’t see that as part of security per se. Instead, that is part of exploratory testing, the responsibility of the quality assurance team to verify. So-
Antti Vaha-Sipila: But it should be, I guess, in some sort of an ideal world.
Antti Tuomi: Yeah. And I think you’re actually right about that. So, some customers think that security is about the technical security issues and technical configurations, whereas all the access rights, and features, and explorative testing should be a quality assurance thing.
And like you said, in an ideal world, that should be the case. But that’s a very interesting cultural difference that I’ve found being here for the last five years.
Antti Vaha-Sipila: Yeah. I wanted to hear that story, because the crystal ball is hazy, but I have a feeling security testing will have to split a bit: toward the automated, fast testing that you need in a CI/CD pipeline, for example, and toward the exploratory, business logic type of testing.
And if you really think about it, if you have a QA organization that can actually plausibly do that sort of testing, it would be a great thing to do it right there, because they have the most understanding of the business logic of the software anyway. So for example, in threat modeling, and for those companies who have exploratory QA, those people are almost always the best ones to find esoteric risks in the system.
Antti Tuomi: And I would definitely welcome that as well, because let’s say, compared to the situation where kind of, we as security testers would go in and do threat modeling on our own, and come up with these attack scenarios, part of them like related to the logic, part of them related to the technology.
Then if I was able to start testing on an application where I could have a session with the QA team, and they go like, “Okay, in threat modeling, we identified that it would be bad if you could, for example, skip this step in the payment process, or if you could access this information. And we already tried all of these and these tricks.”
That would be super valuable information for me as well, where I could just basically corroborate their results as well, and maybe add some technical ways with which you could also try doing these explorative test cases that they came up with. I would be super happy about that.
Antti Vaha-Sipila: And also, the QA people typically know about the regulatory requirements, maybe better than the…Well, usually developers do know about the most common ones as well, but I mean, for a specific type of company, a QA team might actually have more acute knowledge of what it actually takes to be compliant with something.
Antti Tuomi: Definitely.
Antti Vaha-Sipila: So they could see the knock-on effect of a security bug, how it affects the compliance status.
Antti Tuomi: Like you, AVS, I’ve also run a couple of threat modeling workshops for development organizations. And I do have to say that often the development teams and developers have very good technical input, like, okay, what if the contents of this API message are changed? Or what if our implementation is wrong?
But it is often one of the older QA dudes who, when you start going through the threat modeling with a couple of examples and prompt them, could you come up with something that could go wrong? Those guys are often the ones who, when they get the chance to get going, just keep on pushing out really good threat scenarios that nobody else, not even the technical people, could think of. And that’s always a very nice moment when it happens during threat modeling, whether it’s a training or an actual session.
Antti Vaha-Sipila: Yeah. So, back to Janne’s original question on test automation. I think there is actually a trend now that if you have continuous delivery, so essentially code is being deployed into production automatically once it’s complete, that kind of implies you also have to have continuous testing.
There’s a buzzword that’s been thrown around for a while called guardrails. The idea is that you are developing at such a high velocity, deploying to production all the time, continuously, that you need guardrails flanking the pipeline to keep you going straight: if you make a mistake, automation has to catch it, so that you don’t break your production environment.
For example, your cloud environment is probably defined by some Terraform code, so you’re basically one commit away from breaking your whole production environment. You have to have sanity checks, some sort of security checks, in there.
And I think that the focus of automated testing would be moving into that direction, in the medium term, so there is need for that sort of a tooling that can make sanity checks, and cuts most of the low-hanging fruit, so to speak.
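A toy example of such a guardrail check, the kind a CI pipeline might run before applying infrastructure-as-code. The plan structure below is simplified pseudodata, not real `terraform show -json` output, and the resource fields are invented for illustration.

```python
# Flag any bucket whose ACL makes it world-readable; in a real pipeline
# this would parse the Terraform plan JSON and fail the build on a hit.
def find_public_buckets(plan: dict) -> list:
    flagged = []
    for res in plan.get("resources", []):
        if res.get("type") == "aws_s3_bucket" and res.get("acl") == "public-read":
            flagged.append(res["name"])
    return flagged

plan = {"resources": [
    {"type": "aws_s3_bucket", "name": "logs", "acl": "private"},
    {"type": "aws_s3_bucket", "name": "assets", "acl": "public-read"},
]}

violations = find_public_buckets(plan)
print(violations)  # ['assets'] -- a non-empty list would fail the pipeline
```

In practice, tools built for this purpose do the parsing and policy evaluation; the sketch only shows the shape of the guardrail idea.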
Antti Tuomi: Definitely. And if we rule out the purely application-level vulnerabilities, I think the second very large category of vulnerabilities that I’ve personally witnessed recently is vulnerabilities that are caused by misconfiguration of your infrastructure-as-a-service cloud platform. So, all those exposed buckets, or other types of issues.
I remember recently seeing an article about automated tooling for finding these kinds of configuration issues and vulnerabilities, in an automatic way, in infrastructure-as-code deployments as well.
It seems to me that there are these new security activities coming up, but also that some of the old things we used to do in security and software are now finding different places in the life cycle than they used to occupy before.
Antti Tuomi: I think you’re correct, especially in the sense that antivirus or firewalls, for example, are not gone either. They’re still there, but we have grown so used to them that it’s second nature; we take it for granted that we need to be paying attention to those.
And I think a lot of the other application security tasks also have become more kind of common knowledge, and something that we are now used to doing. So definitely they are still there, but instead of testing at the end of the release cycle, we’re now maybe doing it slightly earlier. And I think AVS you had a good term for this testing at the end of-
Antti Vaha-Sipila: Testing at the end of the cycle, yeah, I called it level boss testing. If you remember the old shoot-’em-up games where there was this large enemy at the end of a level, and you had to clear that before you could move forward.
Antti Tuomi: Yeah, raid boss, level boss. So it’s a security test, and if you fail and your party wipes, then you’re back in the development phase, and you go back and collect the souls that you dropped.
Antti Vaha-Sipila: Exactly.
You guys are talking about a trend that’s generally called shifting security left. What do we mean by that?
Antti Vaha-Sipila: It may mean a slightly different thing for different companies. If you are geared towards DevSecOps, it might mean that you add a bit of tooling before deployment, which means that each time you deploy, you get to do some testing before you reach that end-of-level boss, so to speak.
But you can also think about – sorry, left means left on the time axis, basically. And you can go even farther left. So if you do threat modeling, for example, before you do design, then that is even one step farther left, and then you can go even more left, if you consider, for example, product management. And when you’re considering the epics or the business value increments that you’re going to work on, if you do some security work there, then that’s kind of as far left as you can get time-wise.
You have a spectrum of leftism that you can apply in your organization, I guess.
(Laughing) There you go. So you’re moving security to the earlier parts of the software design or software development lifecycle.
Antti Vaha-Sipila: Yeah.
Antti Tuomi: I think these, like adding automation, or adding security-related test cases to the testing, or maybe even threat modeling, are things many of us might be familiar with. But could you give some examples of how you can affect security at the product design or business decision phase?
Antti Vaha-Sipila: Well, I think privacy or data protection is a good source of examples where it actually matters, where you can actually do things early on. I mean, if you need to figure out whether your application is going to be compliant with whatever regulations or laws you have to follow, that discussion probably should happen, or had better happen, early on. Because if you spend time building it, and then find out that you’ve actually built an illegal application, that’s not a good place to be, right?
So at that point, you could extend that discussion. If, for example, you’re doing a privacy impact assessment or data protection impact assessment as mandated by GDPR, or something like that, you could add some initial design discussions onto it.
And one thing that I’ve seen in real life is that you’ve got some legacy backend that holds your personal data, and you have to index the people in that database in some way. Here in Finland, for example, it’s very typical to use the social security number, or as we call it, the personal ID number, to do that. It is a unique identifier, but it’s not random, and the problem is that if it leaks, that’s not a good thing, because it can be used to index the same person across a vast array of other services as well.
So, for example, a design decision could be that the legacy backend is also changed to introduce an application-specific, unique, random identifier that you could use in this context. In that case, you can get rid of using the personal ID number for indexing altogether. But that requires changes not only to your new application, but also to the legacy application.
And because of the lead times involved, it would be a much better thing if this would be raised already, like in the business discussions when discussing about these things.
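The identifier design AVS describes can be sketched in a few lines. The in-memory mapping below stands in for a table the legacy backend would own, and the example personal ID string is a placeholder, not a real person's number.

```python
import secrets

# Mapping from personal ID number to application-specific pseudonym.
# In practice this table would live in (and be owned by) the backend.
_pseudonyms = {}

def app_id_for(personal_id: str) -> str:
    """Return a stable but random application-specific identifier."""
    if personal_id not in _pseudonyms:
        # token_urlsafe gives 128 bits of randomness: unique in practice,
        # and it reveals nothing about the underlying personal ID.
        _pseudonyms[personal_id] = secrets.token_urlsafe(16)
    return _pseudonyms[personal_id]

alias = app_id_for("example-personal-id")
# The same person always maps to the same alias within this application,
# but leaking the alias exposes nothing usable in any other service.
```

The payoff is exactly the one described: if the application-specific identifier leaks, it cannot be used to index the same person across other services the way a national ID number can.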
Antti Tuomi: So, kind of like a bit of foresight in security architecture can save you a lot of trouble going forward.
Antti Vaha-Sipila: Yes. So it is, in this case, this is kind of an information architecture-level decision, almost.
That makes sense to me, but it seems like “let’s just shift security left” is easier said than done. Is that the case? Or are organizations managing this pretty well?
Antti Vaha-Sipila: It depends on the culture, really. It might be that the security function wants to pull security farther left on the timeline, but the product development organization would also need to want to play ball.
In many companies, the security function is still somewhat separate from the product development organization. Usually the security function is something that has been grown out of the IT security function, and IT being kind of support function to begin with. So, you would have to get the buy-in from the product management organization, in order to be really effective in getting that pull towards the left.
I know that many companies start from the very end of the deployment pipeline, for example, and start the kind of left-leaning activities right there, by introducing automation and stuff, because that can be done by maybe development teams themselves, or a platform support team that provides common tools for everybody. And that doesn’t require that much organizational renewal.
Antti Tuomi: Yeah. There’s a couple of, now that you mentioned it, a couple of quite interesting cultural aspects over here as well. So, one of the things that I have personally seen as one of the biggest drivers for shifting left when it comes to application security, maybe the biggest driver, both in a good and a bad sense, is the timeline of deploying or publishing an application.
So, you want to avoid wiping at the level boss, having to go back to the drawing board, and having to decide between publishing a vulnerable application or pushing the timeline back, which is bad for the business. With that looming level boss on the horizon, you can see the signs: okay, here’s a safe point, maybe we should start doing something about this.
So, that looming sense of dread about security having a business impact is, in a way, both a healthy and an unhealthy driver of trying to do security as early as possible. I think a lot of organizations have realized that, okay, are there some security tasks we can do earlier, so that we can make sure we are ready for the deployment on time?
Yeah. That makes perfect sense to me. But do you guys think we’re supporting these developers enough in sort of building a secure software? Or are we just saying like, in addition to everything else you’re doing, better make it secure?
Antti Vaha-Sipila: Yes. Well, that’s something I feel strongly about, in the sense that what we shouldn’t really be doing is just telling the developers, this is how you should behave. The organization needs to provide the incentive, and especially the time allocation, to do things properly.
So it’s not the developers that would be in my focus, it would be really the product management, product owners and everybody else who own the resources.
Antti Tuomi: That’s also very much tied into the other cultural difference that I wanted to point out: unfortunately, many Japanese organizations still don’t have a dedicated CISO or security officer, or necessarily a dedicated IT security team. What’s common is to have a security operations center that maybe monitors logs and alerts from IDS devices, endpoint protection and so on.
But an all-encompassing team that would be able to advise on, for example, application security does not necessarily exist in many organizations. Instead, the security requirements and actions are often actually the responsibility of the development organization.
Not having a security function to rely on is not necessarily a good thing. But at the same time, often the development organizations might have more say into how and when do they include security in the tasks. So that’s an interesting cultural difference.
One interesting consequence of that is that when security is owned by the development organization, and they have people who are passionate about security, who have some kind of spark, or who learned about it and got interested, then those people are often able to drive the security parts of the development process forward. And that often leads to very good results, I think.
Antti Vaha-Sipila: Yeah, I think many companies that I know they have this sort of a security champion program, where they have like a volunteer, either a developer, or sometimes even like a Scrum Master type of person, who then takes security seriously, and they want to keep the flame burning.
It’s interesting that I had a discussion with one senior QA person, and they said they’d be interested in extending their remit from normal QA into doing security testing. Not only because they’re interested in security, but because it’s also something else to do for a while. I mean, if you’ve been looking at the same functionality for a couple of years, it really frees your mind when you get to think about things in an adversarial way, for example.
Antti Tuomi: It’s very beneficial for the organization as well, because these external security testers come in at specific times and they cost money. So being able to take part in this internally as well, getting employees interested in security, and maybe having these champions who have the spark, I think that’s a win-win for everyone involved.
Antti Vaha-Sipila: Yeah. Just look a bit more widely at the people you have in your organization, it doesn’t have to be a developer. If it’s a Scrum Master who likes to listen to Janne’s podcast, that might be just the person you need.
We like those people.
Antti Vaha-Sipila: If you are an in-house security person, I think it would be very healthy if you had a very good understanding of all the other qualities that software needs to fulfill. For example, performance, cost, and time to implement.
A security person who’s never done any commercial software development might not immediately understand how difficult it is to make a small change. I mean, the change itself is a 15-minute job, yes, but all the extra bureaucracy, and testing, and everything else compounds on top of that.
So we should definitely staff internal security functions with people who have previous software development experience in some way or another.
Antti Tuomi: Yeah. And I think it’s also very common for people, let’s say technical security enthusiasts or experts, at the beginning, when they find vulnerabilities, to feel like: this is such a simple mistake, why did they make it?
Whereas the actual reason is that the development team might not have had enough time or resources, or they knew the vulnerability was there, but the application was published anyway.
There are a lot of these things that can easily result in that kind of adversarial feeling or approach, which in reality just shouldn’t be there.
Antti Vaha-Sipila: Yeah. You always have to remember that, at least this is my opinion – in most cases, it’s much easier to find a security bug than to get the whole thing to work in the first place. And for example, I think the job that software developers are doing, it’s much harder than the one that I’m doing at the moment.
That’s what I always tell clients: we have the luxury of thinking only about security, whereas developers have to focus on secondary things like functionality, and usability, and features, and stupid things like that.
Antti Tuomi: And okay, does it work on the latest version of Chrome? Okay, good, but does it work on the latest version of Internet Explorer? What about your LG or Samsung smart TV? Does it work on that one? Now there’s a browser on Xbox, so can you play web games on your Xbox browser?
Now guys, to finish off this conversation, I wanted to ask you: in your work with organizations, what’s the advice you find yourself giving over and over? Like, what’s the one thing you wish people would remember about application security?
Antti Vaha-Sipila: Well for me, it’s easy to answer that one. So, almost every single organization that I talk to, I end up preaching the same thing. So security work doesn’t happen if it’s not given time. And with that, I mean that-
Antti Tuomi: That’s a very good point.
Antti Vaha-Sipila: …it has to have an explicit time allocation. It’s not enough to just stick it on some sort of a definition of done where it’s kind of hovering there, or if you just like-
Antti Tuomi: Like there’s an architecture diagram, and there’s like all these lines, and then there’s like a completely detached box that says security.
Also include security.
Antti Tuomi: Yeah.
Antti Vaha-Sipila: Yeah. “Remember security” on a post-it note on a monitor, that’s not going to do it. It has to be an explicit time allocation. And in most organizations, what that means is that security needs to be ticketed on the backlog, just as any other development activities would be ticketed. And then, how to get that done, that’s not necessarily a straightforward thing, because-
Antti Tuomi: Yeah, that would’ve been my next question.
Antti Vaha-Sipila: …because that then requires buy-in that may stretch back to product management, and even further than the immediate product owner. And that’s taking those shift-left steps much farther than just the CI/CD pipeline, for example. But that’s the thing: time allocation.
Antti Tuomi: Should security be an epic on the board? Or would you attach security to individual tasks? Or-
No, Antti, that’s going to be an entire episode in itself, let’s tackle that another day. What’s your advice for organizations?
Antti Tuomi: That’s a really tough question that I don’t have a good answer for. It really depends on the organization and the parts they’re struggling with. But what I can say is that, in my experience, the companies and organizations who embrace security not just as a compliance thing, but have found the internal people who are willing to advocate for security during the design and implementation phases, those are the ones that end up having the fewest problems to fix, and that end up finding the time and the budget to fix things.
Fair enough. Well, with that, I want to just thank you guys for being on the show today, and walking us through the jungle that is appsec.
Antti Vaha-Sipila: Thank you. It’s been great.
Antti Tuomi: Thanks Janne, thanks Antti.