Cyber crime is a constantly evolving game. As soon as new technology is introduced, attackers start figuring out how to exploit it for malicious purposes. No one understands this better than F-Secure Chief Technology Officer Christine Bejerasco. Christine joins episode 59 of Cyber Security Sauna to discuss the fast-changing world of cyber crime, and how companies can avoid having their new technologies exploited by taking a secure-by-design approach.
Thank you for having me, Janne.
So, can you give me some examples of where newly introduced technologies soon become used for malicious purposes?
This is actually an interesting observation, because I was looking back from a historical perspective at when Microsoft Word, for instance, started to introduce macros. And from what I can dig up on the interwebs, it was 1993 that they started to introduce macros. And then the first proof-of-concept malware that used macros in Microsoft Word came in 1995. So essentially, you now have Microsoft Word, which is actually a tool, that became a platform as well for automation. Which is very good. It’s very good to have macros; that helps us. But of course, that platform then also became a platform for threat actors to perform malicious activities. And even today, it is still being used by threat actors.
So that’s one example. And then another thing is the app stores, which, if I remember correctly, were first introduced by Symbian, essentially. Before Symbian, we had mobile phones that were, well you couldn’t really tailor them to your taste as a user. So you didn’t have your own applications, but you had to be stuck with messaging, alarm clock, and then Snake. (Laughing) That’s what we had then.
And then you had Symbian, where new apps were released and everybody could use them. That was 2003. And then by 2004, there was the first Bluetooth worm on Symbian, which was Cabir.
So technology comes in, and then it also becomes a platform for threat actors to perform malicious activities.
So I guess that makes sense. As our understanding of the technology evolves as users, attackers evolve in their thinking as well. They understand the technology better; they’re able to use it for things that the designers maybe didn’t think of. So what do we do? Just stop introducing new technology?
We would be in the stone age if we actually did that. It’s not really about not introducing new technologies, because for instance, if no one had introduced electricity, then we wouldn’t have the internet that we have today. So whenever we introduce a new technology, we sometimes create not only a singular technology, but a platform for others to build new technologies on top of. So you have electricity, and then you have computing, and then you have the internet, all of which are platforms that enhance our lives. I mean, we can do remote work because of all of these different technologies.
So I’m definitely an advocate for more technologies, even, to be introduced. But it’s really the thinking, that whenever we introduce something, how can that be misused?
Whenever we build our own homes, it’s not that we build the house with a front door that doesn’t have a lock, and then later on realize, “Okay, wait a minute, I’m living in a bad neighborhood, let me add a lock to my front door.”
If we can think of building software like this, then at the beginning we already ask, “How do you build security into this software, rather than bolting it on later?” You build software that is secure by design. And I’m not saying that we’re going to be perfect at this on the first try, but there is this concept now of what we call “shifting security left,” where security should be as close, not just to the development process, but as close to the design process as possible. Like performing threat modeling sessions, for instance: how can a certain capability, a certain API, a certain functionality be misused? And then thinking of how to mitigate that.
Okay, so in the past we had security teams weigh in at the very end of a software development project. You’re now talking about moving that to the left, to the earlier phases of the development cycle, so that we’re thinking outside the box. Not just looking at the features that we intend the software to have, but also, what can somebody malicious or very creative do with this software?
Exactly. And I think that is the best that we can do at the moment. Because in this world, we are adding more complexity on top of already complex systems. What I mean by this is, we have our endpoint operating systems, Windows, Linux, Mac, et cetera. And we have our mobile operating systems. And then to add to that, we have cloud platforms nowadays. And an organization has hybrid implementations of all of this: multi-cloud implementations, different platforms that they have implemented.
And it can be mind-boggling, to be honest, to secure all of those different areas that an organization now owns. And what responsibility do they have? Where does it end, versus where does the responsibility of the cloud platform owner begin?
So these things nowadays, they’re not so easily defined anymore. And therefore, if you are somebody who is leading the security team within the organization, it’s quite natural and understandable to be confused about this.
So the question is, how do we end up helping these people, in order to make it a little bit easier for them? Because the technologies that they are handling, even the old ones, are not really getting deprecated. For example, how many messaging platforms do we have today? And have we even deprecated email, which is probably older than me? So these are the things that the security officer, or the CISO in an organization, is being challenged with. And how can the technology creators or the businesses then help these people?
But what’s the developer team to do? There’s only a handful of them and there’s an infinite number of hackers out there, just clawing at every new piece of technology. They seem to have an infinite amount of time to nitpick every bit of your software and find all the vulnerabilities. So what are you gonna do?
I believe that the developer teams, on their own, would be challenged without the support of the business. So I can give one example. For instance, when we are building new technologies, the first thing that we really want to come out, as a business, is the functionality of this technology. And one of the things, unfortunately, that gets cut, are the security capabilities that should come in, hopefully during the first release.
So whenever the development team is already pressed for time, and they have this priority queue, the security items get cut so that they can go to market on their target time. So my hope is that if the business truly understands how their technologies can actually be used as platforms for attacks, then they would hopefully take more responsibility for this. We need to improve this, and we need to allocate time for the developers, such that even in the first release, and I am now probably dreaming, but even in the first release, there is already security capability at its very core, so that this product is hopefully secure by design.
And then I would also like to encourage not just the businesses, because adding security to products unfortunately comes with a cost. It’s a cost of developers’ time; they need to implement this, they need to think of several angles on how this product can be misused. And of course the company as well: if there is no security knowledge in-house, then they need to work with red teamers outside the company, for example, on thinking of how this product could be misused. Or risk management consultants, for instance. And that helps release a product that has security by design from the beginning. But of course, with a cost.
So the hope is that some legislation or some government support, some incentives, could be available to businesses that are building capabilities that are secure by design. Because I do believe that with the right incentives, this could be good for the business from a monetary perspective. And of course, the business can then dictate that when we release this product, it will not be a product that is used for DDoS attacks, for instance, and we will do our best to ensure that.
I think I’m with you, but let’s unpack this for a minute. What does it mean in practice for developers to make products that are secure by design?
Secure by design, of course, starts with the thinking when you’re still architecting, let’s say, the capabilities of the product. Maybe one pragmatic example is when someone is designing an API. And that API is exposed publicly, or through a certain portal that somebody could still access and then authenticate with.
The hope that I have is that the thinking of the developers would be, “Who is capable of accessing this API? And how do I make sure that authentication for access to this API only works for those parties that are allowed to access it?” Most APIs, of course, are publicly documented. But “how can I make sure that, for example, the elements of this API, or the capabilities of this API, will not be misused, or accessed from geolocations that it’s not allowed to be accessed from?”
So these are the hopes that I have: that even before somebody writes the first line of code for this capability, people will be thinking, and simulating, and hopefully even doing tabletop exercises and threat modeling, essentially asking how this capability could be misused.
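As a concrete illustration, the kind of access checks described here could be sketched like this in Python. Every name in the sketch (the key and region allowlists, `handle_request`) is hypothetical, not from any real API:

```python
# Illustrative sketch of API access control: only known parties, calling
# from permitted geolocations, may use the capability.
# All names and values below are hypothetical.

ALLOWED_KEYS = {"key-alpha": "partner-a"}   # parties allowed to call the API
ALLOWED_REGIONS = {"FI", "SE", "NO"}        # geolocations access is permitted from

def authorize(api_key: str, region: str) -> bool:
    """Reject the request unless both the caller and its origin are allowed."""
    return api_key in ALLOWED_KEYS and region in ALLOWED_REGIONS

def handle_request(api_key: str, region: str, payload: dict) -> dict:
    # The authorization check runs before any business logic is reached.
    if not authorize(api_key, region):
        return {"status": 403, "error": "forbidden"}
    return {"status": 200, "echo": payload}
```

The point of threat modeling before the first line of code is that decisions like these allowlists exist in the design, rather than being bolted on after an incident.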
Yeah, what could go wrong?
Exactly. And in addition to that, once it goes out there, of course, we can only think so much during threat modeling. And once this capability is out there, do we even have monitoring on how it is being used or misused? Because without that, we will be left at the mercy of those who are trying to probe our capabilities for vulnerabilities. I mean, we’re lucky if they are white hat hackers, and they report those vulnerabilities. But if they’re not, then that’s when the product starts to be used for malicious purposes.
This makes perfect sense to me. So I guess the real question is, why isn’t this happening? Is it hard for development teams to justify the extra effort and the extra cost?
I think we sometimes try to put the pressure on the development teams, but to be honest, they are also getting pressured by the product management, the business, essentially, who needs to deliver the actual capabilities. So the cost factor is really one of the issues.
And if we do some root cause analysis, why is the cost factor such a big thing when it comes to security? It’s because there are not a lot of incentives for businesses today to release products that are secure by design. I mean, it’s up to them if they want to be diligent and ship capabilities that are secure by design, but then they also need to fork out the extra cost.
So really big companies can do this, because they have security teams and they have bigger revenues. But if we’re talking about a startup…Say I’m a startup that creates baby monitors with something novel, and I put it out there. Of course I would need my baby monitor functionality first. And then with the rest of the money that I have, I probably won’t have anything extra to invest in security.
But if, for example, my company gets some tax breaks, for instance, if I do secure by design, then that is something that I could factor in towards my finances, that I could say that, yeah, we should definitely do it because it makes sense from a business perspective. Security makes sense, even from a business perspective.
Now, in a market economy, these sorts of problems are supposed to be fixed by the demand side, with consumers demanding better baby monitors. But when you’re looking at a baby monitor, it’s really hard to tell if it’s made properly or not. There are all sorts of padlocks and “uses internet technology” and other kinds of vague technology-sounding statements that I don’t really understand as a consumer. So how do we fix that?
Well, I guess in a market economy, if the demand doesn’t push towards buying a more secure product, and if we just let the market economy dictate this, then unfortunately we will decay towards the point where we have cheaper products that may work as expected, but are performing denial of service attacks against companies like Netflix and whatnot.
That sounds like what’s happening.
Exactly. So that’s already what’s happening today. So we are in the state where we have created this problem, essentially, because of security being treated as an additional cost. If we want to secure all of these different platforms and technologies that we have built so far so that we can build more on top of them, then we need to make sure that the foundations that we are building are solid enough.
Yeah, absolutely. It sounds like you’ve given up on educating the consumers and think that we’re not going to fix this issue that way. Is there something we as a security community can do about this, to make it more transparent, so that it’s easier for people with no technical skills to understand what good and bad looks like? Is there anything we can do?
I’m not sure if this is going to be a very popular idea, but one of the thoughts I’ve been having is that we could promote or praise products that are really trying to build security into their capabilities. Of course, we probably will not find a product today that doesn’t have a vulnerability. But there are products where it’s getting harder and harder to find vulnerabilities, and as such, the vulnerabilities in those platforms are becoming more expensive.
So celebrating those companies as well could be helpful in this regard, especially if they are companies who are not the big ones. And especially if you can see that they are trying to make a living, of course they are trying to make a profit with their products. But still, even with the challenges that they are having, they are working continuously on building security into their capabilities. I think those companies should be celebrated. Because for one, they don’t even get incentives, for instance, from governments. And they’re doing it anyway. So I think it’s very responsible and those are the type of companies that I think we should elevate. And even help out more with.
Yeah, I like this idea. When we’re doing, for example, supplier audits for our clients, and looking at you know, these three providers, and helping them choose which one has the best security, we sometimes come across companies who are really putting in the effort, and then the other two might not be. So I like the idea of giving kudos and praising the company. Like this company’s really trying. Like you said, nothing’s bullet proof, but they’re actually trying, they’re putting the effort in.
So what about some of the other things that these companies can do to help development teams produce better software, for example? Do things like bug bounty programs, do they help? Or are they sort of too far to the right when we’re trying to move left?
Of course a bug bounty is a good addition to, let’s call it, the repertoire of options that you have for security. But I’m also wondering, what can a company practically afford?
We have different options, starting from the very left, when you are creating the design of the software, all the way to your test automation and building automated capabilities. There is software that can automatically find the basic vulnerabilities that we have seen before. And then all the way to the releases.
So every step that somebody would integrate security into would come at a cost, like starting to secure your release management systems against supply chain attacks. That comes at a cost as well. Trying to segment your networks into different areas, that also comes at a cost.
So it would help if, for example, as a company you get assistance in the beginning, or even just a basic framework: what are the areas I can work on when it comes to elevating security? And then maybe put a price tag on those areas and decide what you can afford.
Because I don’t think it’s realistic to expect that all these companies will have to cover all these different areas and then all of them come with a price tag and then say yes, I’m going to do this. I think at the beginning they would be dead in the water before they even release their first product.
But as long as they can find the ones they can afford, and then put in the effort and the investment one at a time, until they cover all of these things, I think the technological plane that we have today will be elevated from a security perspective, and we will all be the better for it.
Yeah, I can see that. Now, we’re talking about all these costs for the software developers and the companies putting their products out there, but sometimes you hear people talk about attacker cost, and raising the cost of the attack, whether it’s actual money, they need special equipment, or whether it’s a cost in terms of time, is that something you think might be effective?
Yes, indeed it is. And this is also a challenging area, because if, for example, the target is really of high value to the attacker, if the cost is less than the value of what they’re trying to attain, then unfortunately it’s still worthwhile for them to perform the attack.
So from my perspective, this is actually a simple calculation. Will the attacker perform the attack? So it would be the value of the target minus the cost of the attack. Some examples that I can think of in the past…for instance, a very popular one is Stuxnet.
When we saw Stuxnet in 2010, we saw that it had, if I remember correctly, four zero-day vulnerabilities. And that was surprising, we thought at the time, because we had not seen nation state attacks like this before. Who would waste four zero-day vulnerabilities in one malware? I mean, these things are quite expensive if you, for example, sell them on exploit acquisition platforms. So that was the cost of the attack. But when we realized that the point of the attack was to delay the nuclear enrichment program of a country, then all of a sudden it’s a very cheap attack.
Yeah, what’s the price tag on nuclear weapons? Yeah.
Exactly. So this cost and value is actually relative: what is the value to the attacker of what they’re trying to get, versus what would be the cost to them?
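That relative calculation, value of the target minus cost of the attack, can be written down as a toy model. The numbers below are purely illustrative, not real figures:

```python
# Toy model of the attacker's decision described above: the attack is
# worthwhile when the value of the target exceeds the cost of mounting it.

def attack_is_worthwhile(target_value: float, attack_cost: float) -> bool:
    return target_value - attack_cost > 0

# Commodity attack: cheap to run, modest payoff per victim -> worthwhile.
assert attack_is_worthwhile(target_value=1_000, attack_cost=10)

# Hardened target: four purchased zero-days vastly outweigh the payoff.
assert not attack_is_worthwhile(target_value=50_000, attack_cost=4_000_000)
```

The Stuxnet example is the flip side: when the perceived value of the target is high enough, even millions of dollars of zero-days leave the calculation positive.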
Looking at this cost versus value of an attack, what are we to do, as people who are trying to protect our organizations, our businesses, and our personal lives against these types of attacks? The main thing we can really do is make these attacks more and more expensive for the attackers, essentially.
So the more we move towards security by design, the more expensive it is for, maybe we can call them the bottom feeders. The attackers that are just blasting away out there, trying to spam everyone, trying to see who would fall prey to it. Because those are the cheapest types of attacks that they can do. So if you can weed them out, then you can clearly say, OK, I don’t have anything that’s clearly of value to anyone who is doing commodity attacks.
And then we go to the next level. If you have something that’s truly valuable, what should be the protection layers you would need in order to secure it? So let’s say, in an organization: segmentation of networks, implementing Zero Trust models, where somebody with an account already authenticated into the network would still need to authenticate whenever they access certain data points in the network. So even if one user is compromised, they can’t necessarily access everything within the organization. Just adding layers of protection capabilities around the data you have helps elevate the cost of the attack as well.
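A minimal sketch of that Zero Trust idea, with hypothetical users and resource lists, where being authenticated to the network is never enough on its own:

```python
# Illustrative Zero Trust check: every access to a sensitive resource is
# authorized again, per resource, regardless of network authentication.
# Users and resources below are hypothetical.

RESOURCE_ACL = {
    "hr-records": {"alice"},
    "source-code": {"alice", "bob"},
}

def can_access(user: str, authenticated: bool, resource: str) -> bool:
    # Both conditions are re-checked on every request: the session must be
    # authenticated AND the user must be on the resource's own allowlist.
    return authenticated and user in RESOURCE_ACL.get(resource, set())
```

Under this model, compromising one account (say, bob's) exposes only the resources that account is explicitly allowed to reach, which is exactly the cost increase for the attacker being described.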
And of course we can look at this from a ransomware perspective as well. Ransomware threat actors have split the responsibilities when it comes to these attacks. Some threat actors will purchase network profiles from other threat actors, and those are for sale on the dark web. And after they purchase those, that’s what they use in order to perform their attacks, essentially.
So how easy or how hard is it for an organization to thwart these types of attacks? How static or how dynamic are the structures of their networks? And how easy is it for attackers to even profile their organization?
So for example, even in the phase where the attacker is still profiling, we can already put hurdles along the way. If there are protection capabilities you have put in your network, cyber security products, and the attacker tries to perform their profiling very quickly, then it’s very easy to find them, because they’re very noisy.
So just having that basic hygiene already kicks out the attackers that are trying to make a quick buck. And then you are left with those low and slow attacks that take some time. And you would also hope that you have put capabilities in place to find them, before they actually end up selling your network profile.
This makes perfect sense to me, but can you think of some examples we can give where this has happened, where raising the bar for security has made life more difficult for attackers?
Let’s take the passwords that we have. Previously, whenever we created passwords online, we created passwords that were very memorable for us, and that a simple dictionary attack could guess in less than a minute. Then pretty much your password has been brute forced and somebody knows it.
But what certain organizations have already done, for example, is that whenever you create a password, you can see the password requirements and complexity as you’re creating it. And there is an indicator that shows “this is a strong password, this is a weak password,” and it will not even accept passwords that don’t meet the requirements. That alone is already creating passwords that are more secure by design, because you are training the user from the beginning on how to create passwords that are secure.
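The kind of requirements check behind such an indicator could be sketched like this. The exact rules here (12-character minimum, mixed character classes) are an assumption for illustration; real sites vary, and length generally matters more than character-class rules:

```python
import re

# Minimal sketch of a password requirements check, as a site might run
# before accepting a new password. Rules are illustrative assumptions.
def password_meets_requirements(pw: str) -> bool:
    return (
        len(pw) >= 12                            # minimum length
        and re.search(r"[a-z]", pw) is not None  # at least one lowercase
        and re.search(r"[A-Z]", pw) is not None  # at least one uppercase
        and re.search(r"[0-9]", pw) is not None  # at least one digit
    )
```

A short, dictionary-guessable password fails the check, while a long mixed passphrase passes, which is the "training the user at creation time" effect being described.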
And then in addition to that, there are also sites where, if you input the wrong password more than three or four times, they will lock your account. This actually thwarts brute forcing attacks against these websites, because how much can you do with brute forcing when you only have three tries?
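That lockout behavior could be sketched as follows, with a hypothetical `Account` class and a three-attempt limit; the limit and the permanence of the lock are illustrative assumptions:

```python
# Sketch of account lockout: after MAX_ATTEMPTS consecutive failures the
# account locks and further guesses are refused, capping any brute force
# at a handful of tries. Names and policy are hypothetical.

MAX_ATTEMPTS = 3

class Account:
    def __init__(self, password: str):
        self._password = password
        self.failures = 0
        self.locked = False

    def try_login(self, guess: str) -> bool:
        if self.locked:
            return False          # locked: even the right password is refused
        if guess == self._password:
            self.failures = 0     # success resets the failure counter
            return True
        self.failures += 1
        if self.failures >= MAX_ATTEMPTS:
            self.locked = True    # too many failures: lock the account
        return False
```

In practice a lock usually expires or requires an out-of-band reset, but the attacker-facing effect is the same: the guess budget collapses from millions to three.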
So these are some of the things that make these types of attacks rarer, especially when it comes to the web services we have. Of course, if we still allowed an unlimited number of password attempts, the attacks would still exist. But at least this is getting better and better through time.
Now, on the topic of passwords, I’ve sometimes noticed that when I mistakenly enter my password somewhere, it seems to take a little bit longer for the response to come back than it would if the password were correct. So is that a hurdle that somebody has put there purposefully, so it’ll cost me more time?
Yes. We actually even have products that do that, so for example…well, I’m going to talk about one product we have that handles password management. The more mistakes you have with your master password, the longer it takes for you to be allowed to log in, to try again. Which is also a security capability, definitely, on this.
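That increasing-delay behavior could be sketched as a simple doubling schedule. The base delay and growth factor here are assumptions for illustration, not the actual product’s policy:

```python
# Sketch of escalating login delay: each consecutive failed attempt
# doubles the wait before the next try is allowed, so automated guessing
# slows to a crawl while a legitimate user barely notices the first delay.

def delay_before_next_try(failed_attempts: int, base_seconds: float = 1.0) -> float:
    if failed_attempts == 0:
        return 0.0  # no failures yet: no delay
    # exponential backoff: 1s, 2s, 4s, 8s, ...
    return base_seconds * (2 ** (failed_attempts - 1))
```

After ten failures the wait is already over eight minutes per guess, which is the "cost in time" the question is pointing at.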
Wow, that’s pretty cool.
Yeah. Another example, by the way, of how elevating the cost for attackers helps thwart some problems is HTTPS. Something as simple as encrypting your communications over the wire, or over the air nowadays, removes the attack vector of sniffing your communications, for example in internet cafes.
I was already working during a time when you could go to an internet cafe, and if somebody logged in to their Facebook account over HTTP, you could actually see their username and password in plaintext. And with the mass adoption of HTTPS in recent years, this kind of sniffing has been eradicated, maybe not 100%, but almost eradicated nowadays.
Is it just me, or…it seems to me that mobile platforms are more secure by design than some traditional technologies. Is it just my impression or is that true?
It’s definitely not just your impression, and it looks like the mobile platforms have also learned from what’s been happening with the desktop platforms, which actually gives us hope that we can actually learn.
We are getting better.
Exactly. So we are getting better. We can learn from what’s happened in the past, and then use that to build more secure operating system models.
So for instance, in a mobile platform, by default there is sandboxing technology. The idea is that applications execute in their own sandbox, which doesn’t directly impact the other applications in other sandboxes.
So let’s say you have your phone book, or the application that you use to make calls. While that app has access to your phone book, another installed app on your device that may be Trojanized cannot access your contacts, which are very valuable.
One piece of evidence for Android or iOS being more secure and harder to exploit than the desktop platforms is, for example, one exploit acquisition platform, Zerodium. If you look at the prices, the bounties for the vulnerabilities they acquire, and compare the desktop payouts versus the mobile payouts, they are actually directly proportional to how hard it is to exploit those platforms. The desktop payouts, for instance, can go up to a million US dollars on Zerodium. And for mobile, they can go up to 2.5 million US dollars.
Okay. So be honest with me. Do you think we’re ever going to get to a point where creating secure technologies, following good security practices, is the norm? Like that’s what everybody’s doing? And we sort of look back on this era of constant data breaches and attacks as sort of the bad old days, the Wild West days of computing, like can you believe it was like that in the beginning?
I have been in this industry for almost two decades, and I can actually see improvements. I’m not sure if in my lifetime I would really be able to see that everything is already secure by design – that would be a dream.
But for example, in the early 2000s, we had what we call red alerts, or A levels, with network worms that were spreading almost every other day. There was always some network vulnerability. And when it comes to these network vulnerabilities that have the capability to perform remote code executions, then it just spreads like wildfire all over the world. And the thing with that is that it kind of died down until we saw WannaCry in 2017.
And my experience with this is that some of the more challenging security problems, they have sort of become a thing of the past. But there are also things like supply chain attacks, which are quite challenging as well, because they spread out to such a wide volume of organizations and individuals.
We are quite good at solving the security problems of the day, and we try to do our best, because of course, well, I would still like to believe in the good of humanity, that we try to do our best to elevate our thinking and our execution when it comes to security, to a whole new level. But at the same time, sometimes we cannot also foresee that with the introduction of new capabilities, we are also introducing another platform for a threat.
And this is the thing I’m hoping: if we can make the time between the introduction of a technology and securing it properly shorter and shorter, so that threats actually have only a small window to play around in, then it would be harder and harder, or costlier and costlier, for threat actors to do this.
A very simple example is patching a vulnerability. I’m not saying that the moment a patch is issued today, you as an organization with hundreds of thousands of workstations are expected to patch everything by tomorrow. But if we can make that patching window smaller and smaller, it makes the attacker’s window of attack smaller and smaller too. And that’s something that I’m hoping to see in my lifetime, that this window will become shorter and shorter.
Well, that’s easy to agree with. And with these noble thoughts, I think it’s time to wrap up today’s episode, and thank you for being with us today. Thanks, Christine.
Thank you for having me, Janne.
That was the show for today. I hope you enjoyed it. Please get in touch with us through Twitter, with the hashtag #CyberSauna, with your feedback, comments and ideas. Thanks for listening. Be sure to subscribe.