Threat hunting has become something of a buzzword in the industry lately. But what is it all about? Why should companies use threat hunting as part of their security strategy? Connor Morley, threat hunter at F-Secure, stopped by for episode 35 of Cyber Security Sauna to talk about how he helps companies take a proactive approach to security.
Listen, or read on for the transcript. And don’t forget to subscribe and leave a review!
Janne: Welcome, Connor.
Connor: Hi there.
The term threat hunting within the infosec industry can mean a wide variety of things to different people, and different vendors use it in different ways. What’s threat hunting to you in your role?
Threat hunting is an extensive mindset with a distinctive capability. It brings together lots of different elements of what we already have, but equally a lot of proactive engagement with how to defend networks. It also allows us to be more tailored to individual infrastructures and business needs. Threat hunting in general, as an overview, is the capacity to actively engage with defending people’s assets in their estates and networks through a constantly evolving approach to understanding, and therefore mitigating, offensive capabilities.
Okay, so is this related to the “assume breach” mentality? Where you sort of assume that you might have attackers in your network, and you’re using different scenarios to look for them and verify, like, maybe this has happened before, let’s see if we can find any evidence of it.
To a degree. We go by the philosophy that all preventative measures will eventually fail, so threat hunting works more on the detective side. So if we go by the assumption that all preventatives will eventually fail due to the nature of attackers being very creative and persistent, we therefore engage in understanding their methodologies and TTPs in order to detect them when they do eventually break into a network.
This can be done either by actively detecting through standard rule setting and tool sets that we actively engage with and develop in order to detect malicious behavior, or by vetting for very specific threat actions in what we call hunt sprints or use cases, which can be down to newly released exploit code or vulnerabilities we become aware of, like BlueKeep and things like that. Once we became aware of that one, we were able to devise hunt sprints specifically for indicators of compromise for that specific exploit. And we were able to sweep across all of our clients’ estates to find out if it had happened.
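As a rough illustration, a hunt sprint like the one described can boil down to sweeping collected endpoint telemetry for a small set of exploit-specific indicators of compromise. The sketch below is a toy version of that idea; the field names and indicator values are invented for illustration and are not real BlueKeep IOCs.

```python
# Hypothetical sketch of a "hunt sprint": sweeping collected endpoint events
# for indicators of compromise (IOCs) tied to one specific exploit.
# All indicator values below are made up for illustration.

IOC_SET = {
    "process_name": {"evil_loader.exe"},  # suspicious binary names
    "dest_port": {3389},                  # e.g. RDP, the BlueKeep attack vector
    "event_id": {1149},                   # e.g. anomalous remote logon events
}

def sweep(events):
    """Return events matching any IOC, tagged with the field that matched."""
    hits = []
    for event in events:
        for field, bad_values in IOC_SET.items():
            if event.get(field) in bad_values:
                hits.append({"event": event, "matched_on": field})
    return hits

# Two parsed telemetry records: one benign, one matching two indicators.
events = [
    {"host": "srv01", "process_name": "explorer.exe", "dest_port": 443},
    {"host": "srv02", "process_name": "evil_loader.exe", "dest_port": 3389},
]
for hit in sweep(events):
    print(hit["event"]["host"], "matched on", hit["matched_on"])
```

In practice the sweep would run over EDR data from every host in every client estate, but the shape of the check is the same: a narrow, exploit-specific indicator set applied retroactively to historical data.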
Okay. So when the next new hot exploit comes down, you guys can immediately answer that board member question, “Is this something we need to worry about?” Not maybe so much about the future, but like, have we already been compromised by something like this?
Yeah. We are able to basically devise a sweep method to determine if they have been compromised previously, or allegedly compromised. We’re then able to incorporate it into our technology stack, so that we can develop detection capabilities in case anyone tries to compromise them using the same exploit, which gives the full scope of legacy and future protection across all of our clients.
I see. Is this a new approach? Where did threat hunting come from?
Threat hunting fills the gap in the industry that exists between offensive capabilities and defensive capabilities. So for offensive teams you have tools like vulnerability scanners, which are the automated approach, and then you have the manual approach of penetration testers and red teamers. Both actively search for vulnerabilities on your network, but the manual approach can obviously find things that are brand new, or unique to an individual estate and things like that.
Defensive capabilities rely primarily on automated systems. The sort of systems you find in a SOC, which are signature-based: they’re devised from individuals’ research, published to production systems, and then incorporated into the alert systems that the assessors use.
Threat hunting is basically the counterpoint to penetration testers and red teamers, in that it’s the manual approach to defensive actions. We don’t wait to be told what is bad, or wait for a machine to tell us if something has happened. We teach the machine if something bad has happened. And we actively and manually hunt for the particular compromises or concerns of our clients based on their needs.
I see. What are some of the myths or misconceptions people have about threat hunting? Like, what is it not?
Well, you have this misconception that threat hunters are constantly sifting through all the data that comes from every machine across all of our clients’ estates. We do go through manual data, but we can’t go through everything. For example, a single machine can generate thousands, even millions of event logs per day. Especially if you’re talking about servers, and you’re dealing with connections, application handling and things like that. So although we do this manual scanning, it’s not all that we do. As mentioned, one of the main things we do is incorporate a detection mindset into our detection systems, constantly adapting them so that we don’t have to go through manual data. But our adaptations to our automated systems are dependent on our manual detection capability. It goes hand in hand.
Oh, another one is that you’ve got this real-time approach to chasing an attacker out of a system if they do get detected. We don’t do hand-to-hand battle with an attacker in a network, you know, grappling with them in real time. But what we can do, and what we do, is use response capabilities to hinder their capabilities and frustrate their activity until we’re able to devise a fully-fledged remediation solution to kick them out of the network. So it’s not that they send an exploit and we’re going to block it immediately; that doesn’t happen. But what we can do is track what they’re doing, how they’re doing it, what they’re trying to achieve, and put in frustrations like bottlenecking network speeds, or isolating particular command protocols and things like that, until their internal response team, or our incident response team, is able to devise a remediation to thoroughly kick the attacker out of the network and stop them from coming back.
That sounds fascinating. You’re actually engaging with a real live ongoing attack.
Yes, on a few occasions we have had clients who have been dealing with active attackers, and we’ve liaised with their internal teams to basically follow this attacker through their network, see where they came from, the initial point of compromise, and work forward to see what their objective is. Once you’ve got the initial point of compromise, obviously you can then plug that hole, but you still would then have to kick them out of the network. And that’s where then we liaise very closely with our incident response team, or if the client has their own response team, we liaise very closely with them to provide all the information and metrics and capabilities that the attacker is using, so that they have the full picture in order to deal with the threat.
I see, but how do you even end up in that situation? You read all these reports that time to discovery in an average organization is to the tune of one year, two years, so how does that even happen, that you engage with a live attacker?
Well, in those cases, I guess they didn’t have a threat hunting team! In our case, because of the EDR tools that we use and the frequency with which we can pull that data, which is very, very frequent, we’re able to parse massive amounts of data through our analytic system and detect them quickly. I think if we detect them in more than 30 minutes, that’s probably bad. I don’t think I’ve ever heard of us not detecting malicious activity; an hour and a half I think was the maximum, and that was to do with something else.
Well that’s still significantly less than a year.
Well, again, because we’re actively monitoring the systems. The other thing to understand is that threat hunters don’t work on a signature basis. A lot of systems work by: if you see something doing this very specific thing, alert on it. Whereas attackers don’t always follow something very specific. Especially if it’s a targeted attack, like an APT, they’re going to customize their attack procedures for their target. And so a lot of our analysis works on behavioral aspects. So if we understand how a network or an environment is meant to be operating, and something is deviating from that norm, with our eyes on the network we’re very easily able to spot that, and then engage with it much quicker than an automated system.
Again, even the automated systems that we use aren’t signature-based. We categorize all of our automated systems by behavioral analysis, and we can categorize by specific client. If a tool normally does one very specific thing on a given client’s estate and it starts doing something else, we get an alert for it. It’s then something that we’ll look into manually, and then we can pivot off of that and see if there’s anything additional, or anything suspect, coming out of it.
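A minimal sketch of what per-client behavioral alerting could look like, assuming a baseline of expected actions per tool per client; the client names, process names, and action labels here are all invented for illustration.

```python
# Toy version of behavioral (rather than signature-based) alerting:
# a per-client baseline of what each tool normally does, with an alert
# on anything that deviates. All names and baselines are hypothetical.

BASELINES = {
    # (client, process) -> set of actions considered normal on that estate
    ("acme", "backup_agent.exe"): {"read_share", "write_archive"},
    ("acme", "psexec.exe"): set(),  # known binary, never expected to act here
}

def check(client, process, action):
    """Return an alert dict if behavior deviates from baseline, else None."""
    allowed = BASELINES.get((client, process))
    if allowed is None:
        # Process with no baseline at all on this client: worth a look.
        return {"alert": "unknown_process", "client": client, "process": process}
    if action not in allowed:
        return {"alert": "deviant_behavior", "client": client,
                "process": process, "action": action}
    return None  # matches the learned norm for this client

print(check("acme", "backup_agent.exe", "read_share"))   # normal
print(check("acme", "backup_agent.exe", "spawn_shell"))  # deviates from norm
```

The point of the structure is that the same action can be perfectly normal on one client’s estate and alert-worthy on another’s, which is what distinguishes this from a global signature.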
All right. Before we go further into threat hunting, was it always like this? In all the years you’ve been doing threat hunting, has the field evolved or changed?
I’ve been doing threat hunting for two and a half years now, and obviously when I arrived on the scene I was fresh out of university. But even in the two and a half years that I’ve been doing it there have been drastic changes, especially at F-Secure Countercept. Talking to some of the original members of the team, the original tool set involved only manually scoping through networks with no response capability whatsoever; the logging capabilities were limited, and the automated scanning was again limited. But although the technology the guys were using back then was in its infancy, their knowledge was incredible.
So over the years that they were working on it, that has been fused into how we now do things. And it’s changed our procedures, our tech stack, our understanding, our capability for individual environments and client needs. This has all evolved from what it used to be; it used to be guys staring at screen after screen of data, the way that people have mythologized threat hunting to be. But that was for a very limited data set, on an equally limited-size estate. Whereas now, we do much more effective threat hunting over enormous estates, multiple enormous estates, with extremely good results and really high accuracy. I mean, some of the things that we’ve picked up have been very stealthy.
What about going forward? How much does research play into your role as a threat hunter?
Research is fundamental. In our concept of what threat hunting is, you cannot be a threat hunter unless you have a research element to what you’re doing. Research is our bread and butter: it’s how we advance our understanding of how attackers operate, how we stay on the front lines, how we keep on our toes, how we advance our analytical systems, how we even devise new offensive capabilities, just so that we can then plug the hole of how they work. And that breaks down into individuals in the team all having different areas of interest, which they are able to pursue at their leisure, basically, to research what they find interesting. And this leads to a huge range of capability and expertise in lots of different sectors that is directly infused into our corporate capability and our defensive capability.
For example, I’m working on an area at the moment that has to do with UEFI detection, based off research that was presented at a very large infosec convention last year. And I’m now devising methods to incorporate into our systems which allow detection of these very unique and very hard-to-detect malicious actions.
Is that based on the idea that you know an attack like this is possible, or have attacks like this been seen in the wild?
Both. So the research itself that I saw was theoretical, but shortly after that there was a malicious compromise of a large corporation which used that technique to fulfill its persistence mechanism. So it worked hand-in-hand: I began it based on theoretical research, but because it’s no longer just theory, and theory obviously leads to practical capabilities in most instances, finding a way to detect it obviously plays into our hands.
No, absolutely. Let’s say you have all this capability. You’re able to detect all these things, you’re ready to get threat hunting. How does threat hunting start? What’s the first day like? How do you decide what you’re gonna look at?
That is a tricky one. It depends. Again, some people have very particular areas that they’re interested in, and the members of our threat hunting team can focus on those particular areas. Threat hunting in general is more a sense of picking out the anomalous – basically finding a needle in a haystack. It’s being able to look at huge quantities of data, and our alerts and different taggings that we get, and being able to piece them together to devise which ones of these are worth investigating, which ones look like normalized behavior.
But if you’re going to get more specific about how a threat hunter decides, “Oh, I’m going to do this hunt sprint today, on this particular thing,” that normally comes from the industry itself as a whole. So as I was just saying, the research that I’m doing is basically someone else’s research. Much of the infosec community, being the infosec community that it is, is very open about the work it’s doing: new exploits that have been found, a very niche vulnerability, that sort of thing, which gets published on social media or Twitter, or individual corporate blogs, which we readily read through and investigate. We then turn that into a hunt sprint, run it across our estates, and then devise an automated solution for it.
So all your manual efforts are sort of based on the newest and the cutting edge, and you’re trusting your tools and detection capabilities and automation to handle all the old stuff, the basic stuff.
So we work on the idea of something called the PARIS model, which works as a gradual increase in trust into an autonomous system. So we take the theoretical concept of a new vulnerability or exploit or something of that nature. We then devise, as I said, a hunt sprint, which is our manual approach to that. We then devise an automated approach to that which goes into our system, which initially will have very low trust. Which we will then refine over a period of testing until the trust reaches a very high probability of success and very high accuracy. At which point it will then be relegated down to an automated system so we can then move on to a new manual system and then start the cycle over again.
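The gradual-trust cycle described above can be caricatured in a few lines of code: a detection starts manual, accrues a track record, and is only promoted to the fully automated tier once its measured precision is high enough. The thresholds, tier names, and promotion logic below are my own assumptions for illustration, not the actual PARIS model.

```python
# Illustrative sketch of gradually increasing trust in a detection:
# manual -> supervised -> automated, driven by its observed precision.
# Thresholds and tier names are invented assumptions, not the PARIS model.

PROMOTE_THRESHOLD = 0.95   # required precision before full automation
MIN_SAMPLES = 50           # don't trust a rule on too little evidence

class Detection:
    def __init__(self, name):
        self.name = name
        self.true_positives = 0
        self.false_positives = 0
        self.tier = "manual"   # every new detection starts fully manual

    def record(self, was_true_positive):
        """Log one triaged alert outcome and re-evaluate trust."""
        if was_true_positive:
            self.true_positives += 1
        else:
            self.false_positives += 1
        self._maybe_promote()

    def precision(self):
        total = self.true_positives + self.false_positives
        return self.true_positives / total if total else 0.0

    def _maybe_promote(self):
        total = self.true_positives + self.false_positives
        if total >= MIN_SAMPLES and self.precision() >= PROMOTE_THRESHOLD:
            self.tier = "automated"    # trusted: frees up manual hunting time
        elif total >= MIN_SAMPLES // 2:
            self.tier = "supervised"   # partial trust: humans still review

d = Detection("suspicious_rdp_logon")
for _ in range(60):
    d.record(True)   # the rule keeps proving itself against triaged alerts
print(d.tier)
```

Once a detection reaches the trusted tier, the hunters are free to start the cycle again on a new manual hunt, which is the economic point of the model.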
I see, okay. So how often does an investigation result in an actual live attacker in a system? You mentioned that has happened a couple times, but is it rare?
I’m very thankful to say that it is quite rare. Finding an active APT, thankfully, is not something that happens every single day. But it has happened, and we have caught them on every occasion so far for our clients. It’s a bit of a rarity, and it normally stems from a new vulnerability being released that isn’t patched, which gives people a foothold, which then triggers something else inside a network, and on from there. Although we have had very rare instances where a group has attacked one of our clients specifically, in a very targeted way, and even in those instances our defensive methodology of how we do threat hunting has always yielded the same result: we have detected them. Maybe not necessarily at the first step, when they got onto the network, but we will detect them when they try to do anything malicious anywhere else on the network.
Again, one of the misconceptions is that threat hunters will detect an attacker as soon as they get a foothold on the network or the estate. That’s not normally the case. Because the estate can be so vast, a foothold, especially if it’s via a new vulnerability, can be very hard to detect. But certain malicious activity is more or less uniform across all attack groups and TTPs: data theft, exfiltration, persistence, pivoting through a network, memory injection and things like that. These are bread-and-butter techniques for attackers. Anything like that, or anything new that we are manually hunting for, which again is based off cutting-edge research, we will detect. And we will find what they’re doing. And then when we do, we’ll trace back to how they got into the network, find that vulnerability, and then move forward with the remediation.
If I was an infosec buyer, the argument of being able to detect attackers every time would sound pretty convincing. So you’re saying there’s never been a case where an attack was detected later to have happened while you were actively threat hunting in that environment?
Not to my knowledge, not currently, no.
That’s pretty impressive. So these actual live detections you’ve faced, can you give us any details or juicy stories about them?
I suppose the most recent one that’s relevant, which is also a story into why you should patch your machines, is that we had a client who was actively compromised by a hands-on keyboard attacker, based on a vulnerability that was identified weeks or so before the breach. Unfortunately the system wasn’t patched due to various infrastructural reasons and things of that nature, and eventually an attacker found it, exploited it, and got onto the network.
We initially didn’t detect how they got onto the network, but what we did detect is then they started trying to access the SQL databases. As soon as they tried this, we detected it almost immediately. We were able to limit them, frustrate their actions, and kick them out of the network. However, because the vulnerability still hadn’t been patched, the attacker kept hopping back in through this vulnerability.
So what we ended up having to do was every time they got in, they’d have about an hour’s window from the data coming back to us detecting that they got back in again, to us being able to put in the executions across the system that would kick them out again. So what we did is we devised with the client a number of strategies to frustrate their activity. Which involved blocking particular ports, which, although it didn’t stop them from sending commands in, it did stop them from receiving the response back to their control systems. So they never knew if what they were doing was working.
We then started limiting the commands for particular internal systems, which, again, greatly limited what they could do, obviously because they couldn’t see the response messages, what they were trying to do stopped working. We also isolated the accounts they were using and limited their administration capabilities, which means when they tried to re-enable these particular systems that we then frustrated, it obviously didn’t work. And eventually, they did give up, even without patching the machine.
Now, that is not to say do not patch. Always patch your machines. But this is a story of how, if they had patched it when it was notified to them, if that was possible, the attacker wouldn’t have got in in the first place. In this case, the attacker was detected and kicked out before they got access to anything of potential value or could cause any damage, but that may not always be the case, because some attacks take place in a matter of five to ten minutes, not necessarily hours. So patching is fundamental.
What I like about that story is that in the industry we are always talking about defense in depth. But you’re also using that – you’re giving ground to the attacker, you’re not stopping them cold, which would alert them that they’ve been caught, their actions are now known. You’re giving ground, you’re frustrating them, you’re slowing them down, and they’re spending all this energy and time on troubleshooting while you’re working to make sure that you know everything they’re doing, make sure you understand how they got in, and are able to evict them from the system once and for all.
One of the most dangerous things when dealing with a hands-on keyboard attacker is to alert them to what you’re doing. As you’ve mentioned, we go to great lengths to be very stealthy when we have detected them so that we don’t tip them off that a threat hunting team such as ourselves is onto them and is tracking them through the network.
The reason that we do this is twofold: One, if they’re aware of what we’re doing, they may suddenly change their TTPs, they may change the way they’re attacking your system. Which, if we’re already aware of how they’re doing it, an attacker is very rarely going to change that unless they have to, especially if it’s working. So if we know how they’re doing it, we can then trace them very easily across the network and they’ll be very visible to us.
Secondly, we stay stealthy because some attackers, if they are detected, will go nuclear. They’ll cause as much damage as possible in the shortest amount of time as a last-ditch attempt to achieve their goal; especially if they were aiming for damage, they’ll just trigger everything they can at once. We don’t want that to happen. So by frustrating and limiting and putting in these boundaries and limitations, which as you say forces them into troubleshooting, it allows us to still keep track of how they’re doing things, and also to come up with a full remediation plan.
One of the key things we notice is that if an attacker has a foothold into a network, if we bottleneck that connection down to five kilobytes, you know, really limit that connection down to minimal, they will still wait to use that foothold rather than make another one.
Because it kinda works.
It does work. And we’ve seen this time and time again. And it allows us to basically play around with the attacker and make sure they’re not getting anywhere they shouldn’t, and keep track of if they are getting close to a sensitive system, that the client is informed and they can take steps to prevent any sensitive data or information getting leaked and that sort of thing. Or equally, just move the machine, just take it out of scope if that’s possible. And that allows us to fully eradicate the threat.
So where does threat hunting fit into an organization’s overall security strategy? What’s the relationship to other technologies like EDR?
Threat hunting is an element all to itself. It doesn’t incorporate into other elements of an internal defensive structure. But it does utilize different elements in a defensive structure in order to be most effective. So threat hunting doesn’t override the need for penetration testing and SOCs and things like that, but we utilize the information gathered by these individual systems such as EDR and red teaming responses and things like that, in order to constantly adapt our capabilities in order to readily defend a unique environment as best possible. So in an overall defensive security strategy, threat hunting is a very niche element to your security needs in order to have the most effective defensive capabilities you can for your unique estate.
So it’s not the first thing I would incorporate in a new company, but if I want to be reasonably sure attackers can’t get in, it’s a crucial part that I still will need on top of everything else.
I would say so, yes. Threat hunting tends to be for people who are more readily targeted by active attackers. So the more ripe the target, the argument for threat hunting becomes more and more relevant. If you’re a very small business, the need for threat hunting capabilities may be a bit of an overkill, but for bigger corporations with intellectual properties or large databases or who handle important documentation, so on and so forth, threat hunting is more or less a necessity nowadays.
What about companies with a history of breaches?
Again, depends on how the breach came about. If the breach was just by poor digital hygiene, that can be solved by other things than a threat hunting team. Whereas a threat hunting team would be very capable of detecting future breaches in those cases, if the corporation itself felt that that was a concern.
Can you talk a little bit about the relationship in threat hunting between manual labor and automated technology?
So, Countercept uses an amalgamation of both. I’ve mentioned a few times how we take cutting-edge research, develop manual approaches, and then incorporate them into the automated structure. We don’t necessarily do that through a development team. We do have a development team who have made an incredible platform that we use. But the Countercept threat hunting team that I’m a part of readily develops internal systems and capabilities based on our human labor. So we have this tradeoff where we actively hunt for the newest or hardest-to-spot threats, the ones an automated system would have no idea what to look for.
But we are also constantly developing our automated system based on our manual research and investigations in order to make it the most accurate and capable detection system that we can possibly make it. And that is based off of individual malicious codes, or malicious behavior aspects, or known TTPs, or even just niche criteria that can be tagged to specific actors, like hack tool sets or things like that. So in a day-to-day threat hunting element, we have this large and very powerful automated system that is built and is accurate because of our manual input system in order to improve it and sharpen its accuracy, which we use to detect previously researched and manually accessed malicious actions, while at the same time continuously using manual detection procedures in order to find new or niche attack capabilities, which we will then fold into the automated system.
So it’s this constant manual development to put into an automated system to move onto a new manual approach that hasn’t been seen before. So we’ve constantly got this balance between known and trusted detections that we’ve devised ourselves with a new manual detection into things that are very unique or just brand new.
Okay. What about the threat hunters as people? What are some of the qualities that make a good threat hunter, some of the skills that are needed?
So, from the guys that I work with, one of the key things is to really have passion for what you’re doing. Everyone on the threat hunting team that I’m a part of really enjoys the work they’re doing and takes pride in what they do. Attention to detail, keenness to learn, basically being hands-on, being willing to jump in at any moment if something goes wrong. These are key skills, key character traits for any good threat hunter. And that incorporates into both the research side, which obviously we depend on for our capability enhancements, but equally the day-to-day cooperation, especially when incidents do happen or a compromise is detected, the guys that I work with will jump in regardless of the situation to help out, and really dig into the details, and really go very, very deep into what’s happening in order to provide the most accurate report and analysis on the behavior that we’ve detected.
What do you need to know to be able to think like an attacker? You’re often referring to TTPs, so knowing the tactics, techniques and procedures of the attackers will be crucial, I’m guessing.
That is a fundamental. All of our team are OSCP-trained at minimum, in order to give them a form of training into the area of offensive security. But in order to have the attacker mindset, it’s that application of – can something be misused? How can I misuse it? Can I make something do something it’s not supposed to? As you said and as I’ve said, with the TTPs, if you understand things like kill chains, and standard procedures and exfiltration, and targets, data theft, and things like this, that all ties into if an attacker is on the network, what are they possibly going to try and do? How are they going to get there? What persistence mechanisms are they using? What exfiltration methods are they using? How are they moving through a network? That sort of mindset.
And it’s this sort of checklist you have to yourself, of if you understand how they’re going to go about doing it or what they’re possibly going to attack, you can work from that point backwards to understand how they got in, but equally, you can then move forward from that point to understand where they’re trying to go.
Absolutely. So how did you personally get into this field? What is your career background?
My career background is a bit short. I worked for a refurbishment plant before I went to uni, fixing laptops and mobiles, and got a low-level understanding of how they worked, then went to university to do security. I initially wanted to be a pentester, like a lot of my university friends. But I happened to come across this threat hunting role, and when I read into it, and obviously met the guys at F-Secure Countercept, and really understood what it is that they do, and the heavy research side of what they do here, it was a no-brainer for me really.
So you went straight into offensive security.
Effectively, yes. I was in a computing security degree at university, so thankfully, it just tied in together. But even when I was a teenager I was on computers all the time, and I started doing some very basic programming and that sort of thing. And that’s obviously what sparked my interest, got me the refurbishment job, got me through university and now I’m working at extremely technical levels, which…I love my job.
Absolutely. So what’s the future of threat hunting? What does that look like?
So threat hunting as it’s moving, just in general the security industry, is moving toward things like a zero trust model. So one of the main factors of compromise nowadays is internal threats, where an employee of a company is used as the point of attack, or is the attacker. And because they already have access to the system, it makes it much harder to detect and associate with a particular person, especially if they understand how the systems work.
So the zero trust model that a lot of – or basically, that the industry should be moving to – is that all action is deemed as untrustworthy, so therefore needs authorization and categorization, which means that there can be no action on an estate that isn’t associated or categorized to a particular person or a deliberate activity. And if it then falls out of either of these, it can then readily be detected.
One of the other things the industry is moving away from is blacklists. At the moment blacklists are getting longer and longer, because there are so many ways for an attacker to get around things, so instead the industry is moving to a primarily whitelist basis. That means software run on the network or on an estate will be whitelisted and authenticated by the client’s security teams before it’s allowed to be used. And anything that deviates from the allowed procedures or technology will therefore automatically be blocked.
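The difference between the two models is essentially default-allow versus default-deny. A toy illustration of the whitelist (allowlist) approach, with made-up paths standing in for whatever the security team has actually approved:

```python
# Toy default-deny execution policy: only binaries explicitly approved
# by the security team may run. Paths here are invented examples.

ALLOWLIST = {
    "c:\\program files\\approved\\editor.exe",
    "c:\\windows\\system32\\svchost.exe",
}

def execution_allowed(path):
    """Default-deny: anything not pre-approved is blocked."""
    return path.lower() in ALLOWLIST

print(execution_allowed("C:\\Windows\\System32\\svchost.exe"))  # approved
print(execution_allowed("C:\\Users\\bob\\dropper.exe"))         # blocked
```

A blacklist inverts this logic and must enumerate every bad binary, which is why it grows without bound; the allowlist only has to enumerate what the business actually uses. Real implementations would key on signed hashes or publisher certificates rather than file paths, which attackers can trivially change.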
As for threat hunting, this ties directly into the same idea: we will be adapting to these and other industry changes in order to maintain cutting-edge detection of methodologies that may try to get around this, or find ways to leverage it to their advantage, and things like that.
Yeah, both the trends you’re talking about, the move from static defenses and blacklisting to whitelisting and more dynamic services and the zero trust networks, those are fascinating concepts and I agree that the industry is moving in that direction, but on the other hand the industry is still struggling with trying to replace Windows Server 2008. So do you think we’re ever gonna get there in any meaningful sense?
I think because of the way the computing industry has moved, especially due to the publicity and scope of the damage that’s been caused by cyber attacks, and the impact on trust and effectively revenue for these companies, the push for a more secure standard of computing infrastructure will be forced through out of necessity. Not just from a security industry standpoint, which obviously we’re all very keenly aware of, but simply from a general awareness of the implications of not employing these sorts of capabilities. The risk of not taking advantage of and improving security methodologies, and even just general digital hygiene as I said, is too big for a lot of corporations.
Right. So what advice do you have for companies who want to start to build a threat hunting team or want to outsource one?
The threat hunting team is all about people. You need to have the best people you can find to fill the roles of your threat hunters. They need to be keen, they need to be willing to jump in, and they need to have the technical know-how in order to facilitate the role. Although doing this in-house will require a lot of training, it is highly recommended that a threat hunting team be put in place for any corporation dealing with sensitive materials.
For companies thinking of hiring out the job of a threat hunting team, what you need to do is find a threat hunting team that you can trust to get the job done. One that you feel is going to be at the front of the line, and who is going to be able to jump onto any incident and handle it from start to finish until you kick an attacker out of your estate. The core thing to remember is that preventative systems will fail. So you do need to have a detection system in place. And there is no better detection system currently than an efficient threat hunting team.
Thanks for being with us today, Connor.
Thanks for having me.
If you want to find out more about threat hunting, we have a whitepaper out at f-secure.com/threathunting. And as an example of threat hunting research, Connor’s Killsuit paper is linked in the show notes.
That was our show for today. I hope you enjoyed it. Make sure you subscribe to the podcast, and you can reach us with questions and comments on Twitter @CyberSauna. Thanks for listening.