Earlier, I wrote that detection is where companies have the most room for development, and discussed what attackers are doing behind your back. I advocated that one should exploit the so-called attackers’ dilemma or intruders’ dilemma to turn the tables against hackers. What does that mean in practice?
For the sake of argument, let’s assume that some preventive security controls designed to protect you from breaches have already failed.
Am I suggesting that your defenses will fail?
In a word, yes. Eventually, if they haven’t already. There are plenty of reasons why you’ll probably never get your systems functioning well enough to resist all attacks. For example, you may have legacy systems that are critical to your business. They open up attack surfaces that are difficult to control and easily exploitable by attackers. If you run an operation bigger or more complex than what one team can handle, it’s possible (and maybe even likely) that you don’t even know what assets you actually have running in your networks. And you probably haven’t tested your level of exposure or your ability to contain a breach.
We’ve heard all the excuses for not putting more effort into securing systems. And no wonder: it’s hard, boring and unrewarding.
It’s only human to postpone the boring, difficult or laborious task of keeping your house in order. Most of the time it only makes you sound like an old nag who your coworkers want to avoid. When I occasionally fall into finding excuses myself, my friends call them Erkuses (Erka’s excuses for inaction).
At last year’s USENIX conference, Rob Joyce, head of TAO (Tailored Access Operations, the NSA’s offensive hacking unit), made a rare public appearance and explained that the reason they’re so successful at getting into their adversaries’ systems is that they take the time to learn every little aspect of their targets’ networks. In the end, they know those networks better than the defenders do.
“We put the time in to know it better than the people who designed it and the people who are securing it,” Mr. Joyce said. “You know the technologies you intended to use in that network. We know the technologies that are actually in use in that network. Subtle difference, did you catch that?”
If Mr. Joyce’s talk wasn’t enough to guilt CISOs around the world into launching vulnerability scans of their networks and red teaming their security controls, nothing will. Would you dare not to?
So let’s agree that it’s actually quite reasonable to assume a breach has already taken place.
Even attackers assume they’ve been breached
In my previous article, I argued that opportunistic attackers have a businesslike approach to being detected. A more professional intruder, however, is way more allergic to the idea of getting caught. Once inside the victim’s network, a professional intruder is constantly on the lookout to spot any signs of being detected and caught. They avoid making mistakes and try to tread lightly on the conquered land to avoid drawing unnecessary attention. They too assume that someone will eventually disrupt their operation(s). What you as a defender would call a successful detection or effective response will – in the eyes of intruders – constitute a breach of their mission.
Hence, attackers have a plan B that they activate if they decide their operation becomes compromised. If they are still in control of the situation, they simply pack up, clean up, and leave – only to return later with the help of a backdoor they set up while they were still on top of things.
The backdoor may be in the form of a so-called golden ticket – a forged Kerberos ticket that lets its holder impersonate any user in the domain. Or it might be as simple as a stolen password. One would imagine passwords to be easy for defenders to invalidate and, hence, of little value to attackers. But in reality, individual users hesitate to change passwords and postpone doing so until there’s solid proof they’ve been compromised. On an organizational level, technical, procedural, cultural, and individual resistance generates so much heat that most CISOs burn their fingers before getting a grip on the problem. The habitual use of shared and “hereditary” default passwords is exactly the kind of thing that Mr. Joyce’s merry fellows mercilessly exploit. And if stolen passwords don’t work out, a golden ticket will grant the intruder a free pass for as long as ten years after the initial breach.
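One practical tell of a forged golden ticket is a validity period far beyond the domain’s policy (ten hours is a common default, while forged tickets are often minted for years). The sketch below flags such outliers; it assumes you have already parsed ticket-granting-ticket events (e.g. Windows Event ID 4768) into dictionaries, and the field names are illustrative, not any real log schema.

```python
# Sketch: flag Kerberos TGTs whose lifetime exceeds domain policy -- a classic
# golden-ticket tell. The event dict layout here is a hypothetical example.
from datetime import datetime, timedelta

MAX_TGT_LIFETIME = timedelta(hours=10)  # typical domain default

def suspicious_tgts(tgt_events):
    """Yield events whose ticket lifetime exceeds the domain policy."""
    for event in tgt_events:
        lifetime = event["end_time"] - event["start_time"]
        if lifetime > MAX_TGT_LIFETIME:
            yield event

# Fabricated example events:
events = [
    {"user": "alice", "start_time": datetime(2016, 5, 1, 9, 0),
     "end_time": datetime(2016, 5, 1, 19, 0)},     # 10 hours: normal
    {"user": "svc_admin", "start_time": datetime(2016, 5, 1, 9, 0),
     "end_time": datetime(2026, 5, 1, 9, 0)},      # ~10 years: forged?
]
flagged = list(suspicious_tgts(events))  # only svc_admin's ticket is flagged
```

Real detections would also look at encryption-type downgrades and tickets issued without a preceding TGT request, but lifetime alone already catches the lazy cases.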
The backdoor could also be an implant that sits in a suitable gateway node, or manifests itself as a vulnerability that the intruders know will still be there for them to use at a later stage (don’t get me started on how easy it is to convince enterprises to patch early and patch often). The intruder may have left malware-infected documents in your document repository to re-infect your users’ computers at a later date. Or they might run a new phishing round or the old repairman trick to make your unsuspecting users invite the attackers in again.
Not only are the intruders uninvited guests, but chances are they will become recurring visitors. It pays off to start going after them.
When preventive security controls fail, you need to rely on your ability to detect and respond to incidents.
If you can’t repel them, detect them
You need to detect breaches before you can give any reasonable consideration to mitigation. Sounds simple enough. And yet, this is where most organizations fail…badly.
While it would seem that attackers have an advantage here, there’s actually a lot that defenders can do to turn the tables on intruders. Everything the attacker does is bound to leave a trail of log events and configuration changes behind them.
Sure, the compromised system may not be able to tell you when it’s “owned.” A proper backdoor may have well-oiled hinges, but there is a chance that a firewall, web proxy, IDS, antivirus, or logon server has logged at least something. Make sure that something will be preserved and subjected to analysis. Not only will that give you a chance of spotting something, but it will also make it possible to travel back in time to reconstruct a chain of events once you learn more about the nature of the threat.
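That “travel back in time” is easier if preserved logs from different sources can be laid out on a single timeline. The sketch below merges pre-sorted event streams chronologically; the `(timestamp, source, message)` tuple format is an illustrative assumption, not any product’s actual output.

```python
# Sketch: merge preserved logs from several sources into one chronological
# timeline, so a chain of events can be reconstructed after the fact.
import heapq
from datetime import datetime

def merged_timeline(*sources):
    """Merge pre-sorted event streams (timestamp, source, message) by time."""
    return heapq.merge(*sources, key=lambda event: event[0])

# Fabricated example streams:
firewall = [
    (datetime(2016, 5, 1, 2, 14), "firewall", "outbound 443 to rare host"),
    (datetime(2016, 5, 1, 2, 20), "firewall", "large upload"),
]
logon = [
    (datetime(2016, 5, 1, 2, 12), "logon", "alice signed in from WS-0731"),
]

for timestamp, source, message in merged_timeline(firewall, logon):
    print(timestamp, source, message)
```

Seen in isolation, neither stream looks alarming; interleaved, a logon followed two minutes later by traffic to a rare host starts to look like a story worth investigating.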
When attackers shift from using malware and exploits to “living off the land” and impersonating authorized users, they will work within the system. That is, they will log in as real users and will show up as authentication events and access log entries.
There are no ghosts in the Matrix. The trick is to stop expecting systems to send a distress signal when they’re taken over by attackers, and instead devote time to finding patterns and other telltale signs of intruders operating within the system. Unusual activity after logon events, users signing in from unfamiliar workstations, and abnormal network activity may all be signs of an intruder piggybacking on and impersonating a legitimate user.
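The “unfamiliar workstation” pattern can be sketched in a few lines: build a per-user baseline from historical logon events, then flag logons that fall outside it. The `(user, workstation)` pair format is an assumption for illustration; in practice you would parse these from your logon server’s records.

```python
# Sketch: flag users signing in from workstations outside their baseline.
from collections import defaultdict

def build_baseline(history):
    """Map each user to the set of workstations they normally use."""
    baseline = defaultdict(set)
    for user, workstation in history:
        baseline[user].add(workstation)
    return baseline

def unusual_logons(baseline, new_events):
    """Yield logons from workstations the user has never used before."""
    for user, workstation in new_events:
        if workstation not in baseline.get(user, set()):
            yield (user, workstation)

# Fabricated example data:
history = [("alice", "WS-01"), ("alice", "WS-02"), ("bob", "WS-07")]
today = [("alice", "WS-01"), ("bob", "DC-01")]  # bob on a domain controller?
flags = list(unusual_logons(build_baseline(history), today))
```

A real deployment would add time windows, decay old baselines, and rank alerts rather than treating every new workstation as hostile, but the core idea is exactly this: model normal, then look for deviations.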
Don’t just shoo the intruder away – respond wisely
As we noted in a blog post fresh after the DNC breach, attackers hate to operate in an environment where they are being monitored without their knowledge. Just imagine their frustration as they slowly realize that their every move has been anticipated, their tools have been copied, and they’ve spent the previous month chasing dead ends and collecting fabricated “loot.”
When you have real-time visibility into where the attackers are, you can focus your response efforts where there is evidence to be found and mitigation is effective, thus frustrating the intruders. If the attackers are caught by surprise, they may be forced to run for the exit without time to clean up, providing you with valuable information about their objectives and capabilities.
In practice, however, once defenders have been tipped off about an intrusion, there is an inclination to move into “cleanup” mode as quickly as possible. Cleanup is noisy and signals the intruder that they’ve been spotted. Even worse, cleanup actions give away the defenders’ next steps and reveal the limits of their understanding of what was breached. Attempts to determine the scope of the breach, understand the actions taken by the intruder, and identify the original infection vector are seen as secondary to getting the business back online.
Predict and up your game
Once you know what the infection vector was, what types of tools were used in the breach, and what other attack campaigns you can link them to, you can improve your game. Patch vulnerabilities, replace compromised systems, shine spotlights on dark corners, cut off attack vectors, and limit the overall attack surface. And just like end users who only bother to change their passwords once they know they have been compromised, most organizations need concrete evidence in the form of an incident or a near-miss before they take steps to improve their security.
As a bonus, you get solid evidence about how well your existing security controls reduced the damage or made the attackers’ lives more difficult. No more guesswork and hypothetical estimates, but measurable losses and verified wins.