Episode 41 | The Ethics of Red Teaming

Melissa Michael

29.06.20 31 min. read


Red team testing is somewhat intrusive by nature, as it involves breaking into companies – albeit at their request – to help them improve their security. Red teamers must bluff their way past receptionists and hack into employee computers, things that would put anyone else in a lot of trouble. So at what point do red teaming activities cross the line into being unethical, or even criminal? For Episode 41 of Cyber Security Sauna, F-Secure’s veteran red teamer Tom Van de Wiele stopped by to share what a red teamer is not willing to do in the name of security, why cyber security experts need a sense of ethics, and how red teamers and companies alike can make sure that their own ethical concerns are addressed.

Listen, or read on for the transcript. And don’t forget to subscribe, rate and review!

For more information about red, blue, purple and gold teams, see the F-Secure Guide to Rainbow Teaming.


Janne: Welcome, Tom.

Tom: Thanks.

So let’s start at the beginning. When a client orders a service, do you ever get asked to do a red teaming test that’s not strictly against the company itself, but almost against a third party?

That happens sometimes. Sometimes the customer knows and sometimes they honestly don’t know. For example, they own the building or they have a certain responsibility over the service, but it doesn’t necessarily mean that you can just go and test it for security holes or try to intrude into it.

How do you know that? How do you know who owns the building? There was a fairly public case in the US recently, where some red teamers got into some hot water because they were performing an assignment as ordered, but then somebody decided that they didn’t like it after all.

That was a very interesting case and it dealt with federal law versus local laws. Finding that out is part of our homework. And we try to do our due diligence there, to make sure that we are not breaking the law and that ultimately our customer doesn’t get into trouble either.

Tom Van de Wiele, Principal Security Consultant, F-Secure

Okay. Any other sort of ethical concerns you have right there in that first customer meeting?

Well, usually, if you have done this for a while, you can kind of read between the lines as far as what the customer wants to have tested and why. You will always get the boilerplate explanation of, “We want to do this for security reasons,” but there’s usually another reason behind it.

So you try to ask questions, as far as what defense mechanisms the customer wants to have tested. But sometimes discussions come out where they say, “Look, we really hate this particular supplier, and the only way that we can kick them out is by giving them a really bad review. So if you guys could just do your tests, but whatever you come across from vendor ABC, make sure it looks really bad,” for example.

There we have to push back and say, “Look, we need to treat all vendors in the same way. We cannot write the report there in the pre-sales meeting, even before you’ve started.” But these are examples of things that do come about.

Do you often get sort of asked to put what the customer already knows to be the case, to put that in your own words so that it carries more weight because it comes from outside the organization?

Sometimes that gets asked. Of course, sometimes we come to our own conclusions and then it turns out that the customer was right. But sometimes the picking of the words can be very specific, and whatever you write can be used as a stone to throw at someone or a company or what have you. So we try to be very detailed in how we put certain things into reports and try to add as much context as possible, so it doesn’t become a black-and-white statement that can be used for whatever purpose by someone who has it in their hand and wants to use it against someone else.

Right. But then again, red teaming assignments are often ordered almost to show internal stakeholders what the reality of the situation is. Do you have an ethical problem with that?

Well, it depends really. You’re going to hear that sentence a lot of course, because all situations are different and there’s always context to consider.

Ethics especially.

Ethics especially. Also, because when you really look at the whole definition of it, the ethical part comes in where something is technically not illegal, but could still have consequences for yourself, for others, the customer, or even both. So it’s kind of a common courtesy towards the people you work with or work for, to determine where these ethics begin, where they stop, and where you hit a certain gray zone: where you need to discuss certain topics with your customer, as far as what you will do, what you will not do, and how you will react or what decisions you will make given a certain set of parameters or situations.

So in information security, especially in red teaming, there are of course do’s and don’ts in the industry, as with every profession. Those ultimately get translated into best practices. But you’re not going to find a textbook that outlines every single consideration you need to make when, for example, dealing with customers that require red teaming, or where you have to do a certain job, try to get into a certain service, or read someone’s information or work data.

So, cybersecurity professionals need to be aware of the many ways in which their actions or their inaction might have a significant impact on someone else’s life, someone else’s work, the company as a whole, and either now or in the future.

Can you give us examples of where you draw the line? What kind of assignments have you actually turned down?

We’ve turned down assignments where the customer asked us to get into services or facilities that they didn’t own. So think cloud services, trying to hack those services. We’d need the provider’s permission for that first.

I mean, obviously we can try to steal someone’s password if that’s allowed and try to reuse it. I mean, for the service, there will be no difference. But directly attacking that service, that of course we cannot do because that’s illegal.

There are other examples where someone has asked us to target a specific person. That is really tricky as well, because we cannot single out any specific person, or even a department, if the department is too small in its number of employees.

Sometimes customers ask us, “Can you give us a complete list of every single person that clicked on the phishing link or that ran your malware simulation?” or whatever it is. Because at the end of the day, it doesn’t really matter. We’re there to address the process and to see, does this company need more controls, more security measures when it comes to processes, when it comes to training, when it comes to technology?

But it’s not our job to help the customer kick out or chastise certain people within the company, because for us, it’s all the same. It’s just, everyone needs to follow the same process. There are no people that are more equal than others.

Yeah. I remember I came face-to-face with one incident where an international operator wanted us to assess this very specific piece of hardware that only had two or three users in the world at that time. Governmental, military organizations and things like that, and this company certainly wasn’t one of them. Even the premise was like, “Why do you even have this piece of equipment? Are you just looking for a list of vulnerabilities or what’s the situation here?” And we actually turned that down as well.

Yeah. Or it could be examples where the customer is asking us to retrieve information which might give the company itself problems with GDPR, for example. Because just having the information, and having it stored in something that hasn’t received any security measures or data classification from that customer, could already be a problem right there.

So it’s really important as part of red teaming that you discuss these things first: when we steal or receive confidential information, where do we store it? Do we even need to store it? If we need to see whether we can get into your customer database, I don’t want your customer database; give me a few numbers, and I’ll tell you a few characteristics of the data that prove I had access to it. And from that moment, we try to figure out a way to test the response of your team when I’m trying to download a piece of information, or data that looks like it but isn’t necessarily the real database. That way, you test the controls and we stay out of that gray zone.

That makes sense to me. One edge case that popped up recently, I saw it on Twitter was, there was discussions about companies asking for security tests on the home network of their employees because everybody’s working from home. Is that something you’d be willing to undertake?

No. I don’t really know where these requests come from, because I mean, someone’s home network is not much different from sitting in a hotel room or in an airport or any kind of public venue, in that you need to consider the network compromised.

That should be okay, because that is why companies harden laptops or any devices you take on the road, be it at home or wherever you are: to make sure that you always have hard disk encryption, that you’re using a VPN, that you know what services you’re talking to, and that you’re aware these things are out there.

Your home network is no different. I mean, no one’s home network is going to receive the same level of attention or have the same number of people working to secure it. So by definition, you should consider it breached, so to speak. So I don’t know where these requests come from, because it kind of goes beyond the normal working environment.

Yeah. I don’t know either, because if you can use the laptop anywhere outside the company, then surely any place outside the company is equal.

True. If you work for a governmental defense agency or whatnot, that could be the case that that is required. But now, that’s maybe the 1% of all the use cases there.

It could also be that an organization doesn’t really know what they should be testing. And that comes back to this: if you know that a certain organization is not ready for a red team, then someone working in cybersecurity also needs to have the necessary ethical background to first propose other initiatives to get that maturity up before doing the actual red team test, knowing very well that the customer doesn’t really have anything to detect or respond to the attack with.

So, consider these networks compromised, consider them no different than a hotel room or an airport. And from that point on, you can go out and do your threat modeling and risk modeling.

That’s fair. All right, let’s say you accept an assignment and now you’re performing the services. And as a red teamer, you’re sending out phishing emails, installing malware on people’s computers, faking your way into buildings and invading sensitive areas to help the company find out the vulnerabilities, improve their security. Where’s the line? What activities are going too far?

First and foremost, you need to follow the law of where you are, local laws, federal laws, international laws. So, that comes first.

That’s also one of the biggest differences between actual attackers and us, is that we are bound by law.

Correct, but it doesn’t mean that our attacks need to be less effective just because there are certain things we cannot do. For example, take identity theft. You can propose certain mitigation strategies to customers and say, “Look, this is what you can do to detect us, this is how you can respond to it, and this is, to some degree, how you can prevent it,” but it doesn’t mean you actually have to perform those attacks.

My favorite example is always: you don’t need to trigger the fire alarm to test fire safety. This is the same thing. So you want to always make sure that you are within the full bounds of the law. And with that, of course, come certain do’s and don’ts.

You are not allowed to intrude on the personal living space of someone. Even if that were legal in certain countries, we would still not do it, because it’s intrusive and even a little bit creepy. And if someone is able to sit in your backyard trying to do all kinds of IT attacks or cyber attacks or whatnot, they’re going to get in.

So it’s not really up to us to say, “We’re going to intrude on the personal living space.” There are other people that do that. Private investigators in certain countries have that right, or it is within the law for them, but for us it is not.

Same thing when it comes to anything that has to do with the airwaves. We have to be extremely careful about what we can do there, especially when it comes to receiving, but also when sending. And also when it comes to any kind of communication towards a person, be it letter, be it email, because there are some very strict laws on that too.

Outside of those laws, whatever you come across, you can use in red teaming for sure, but we do need to draw a line somewhere. If it deals with very personal details of someone, we’re probably not going to use it. Will it work? Sure. But we also need to consider that it needs to go into the report afterwards, and we will have to disclose it to some extent to the rest of the company. So that’s why we look at things, we’re very good at forgetting certain things, and we pick better examples that will prove the point and will not get that person or the organization into trouble.

Then how about your personal ethics? Let’s say you’re smooth-talking a receptionist, for example. How do you feel about lying to that person?

Well, a criminal is going to do the same thing and a criminal is going to do it for the purpose of stealing information, transferring funds or whatever it is that they got into the back of their head that they want to do. So, I’m there to address the process.

We usually get the security awareness training that the company’s employees are receiving, or what employees are supposed to follow, and we try to find examples where the person should know better. That does mean a few white lies trying to get past the reception, using whatever excuse, in cases where their handbook says they shouldn’t allow certain things, or that they should handle certain requests for information in certain ways. And then we try to find the actual gray spots, the gray zones in between those: situations where there’s no real playbook, and where the procedure or the process turns into guidelines and the employees need to start thinking for themselves.

That’s usually where things can go a little bit wrong, but that’s also what a criminal will do. So, that’s what we’re trying to replay. But again, we do it in good taste because there’s different ways of getting past…

If you take your example of getting past the reception, I’m going to take a really extreme example now. You could pretend that you’re part of some kind of medical service and say that a family member, friend or spouse is in a serious health condition. That’s certainly going to get the attention of that person, but we don’t want to do it in a way that will hurt or harm other people, short term or long term. We’re allowed to cause a little bit of discomfort. Maybe we can embarrass people a little bit, trying to evoke some kind of emotion, but we cannot traumatize or harm people while doing this.

Yeah. Okay. So it’s different to say, “I have a parcel to deliver,” than to say, “Your father is dead.”

For example, or, “Click on this email because the CEO just died and it’s about your shares in the company.” We’ve seen this done, but we don’t do that because, again, there are different, more effective and more tasteful ways of trying that out.

Okay. Now, you work for an international company. I know you’ve done red teaming in different countries. How do geography and cultural differences play into the ideas of what’s ethical in red teaming and what’s not?

There’s a definite difference between different countries, different cultures. If you were to type in your password on your laptop in front of me, then I have the kind of natural courtesy to look away, to show you that, “Look, I’m not interested in your password. That’s something that belongs to you.”

But there are countries where that is not really a thing. And if you ask people to maybe look away while you’re typing your password, they will be a little bit stumped and ask you, “But why? We’re all employees of the same company.”

So, it is different in certain countries that I’ve seen. Also when it comes to countries where respect and your hierarchical position in the company are extremely important, and where, consequently, losing face is extremely important to avoid. There, it becomes really difficult sometimes to have certain scenarios tested.

For example, when we discovered a weakness in a certain IT system and verbally reported it to our customer, they asked us not to report it, because it would not bring any honor or respect to the family of the programmer that was responsible for that system. And there you are.

So you need to find ways of bringing this information to the customer, because that’s ultimately what they’re paying you for, but there’s different ways of reporting it. So you want to make sure-

That’s super interesting.

It happens. It happens. We’ve also had cases, for example, where we were asked to see if we could get in through what’s called password spraying or credential stuffing: guessing a bunch of common passwords, or reusing passwords that have leaked from other services that someone has used.

We try this technique out as part of our tests. And we see, for example, that we get in using very, very basic passwords. We do not put the passwords in the report, because we do not want the report to become a weapon: once you have possession of the report, you can access the services of these people.

Of course, we tell people to change their passwords after the security test, as part of some general updates or whatnot. But within that window of time, it might become an ethical situation where there’s a risk that someone is able to access those services and no one should have the sole responsibility for that. So, that’s ways that we kind of steer away from these kinds of sticky situations.
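The password spraying technique Tom describes can be sketched in a few lines. This is an illustrative toy, not F-Secure's tooling: the account names and the `try_login` callback are hypothetical, and a real engagement would throttle attempts to avoid account lockouts and would log every try for the report.

```python
# Sketch of password spraying: a few very common passwords tried across
# many accounts (the reverse of brute-forcing one account).
from typing import Callable, Iterable

# Hypothetical list of weak passwords an attacker might expect to find.
COMMON_PASSWORDS = ["Summer2020!", "Welcome1", "Password123"]

def spray(usernames: Iterable[str],
          try_login: Callable[[str, str], bool]) -> list[tuple[str, str]]:
    """Return (user, password) pairs that succeeded.

    try_login is a stand-in for whatever authentication check is in scope;
    a real tool would also rate-limit and record each attempt."""
    hits = []
    for password in COMMON_PASSWORDS:   # one password per round...
        for user in usernames:          # ...across all users, to stay under lockout thresholds
            if try_login(user, password):
                hits.append((user, password))
    return hits
```

The outer loop over passwords (rather than users) is the defining trait of spraying: each account sees only one guess per round, which is exactly why lockout policies alone don't stop it.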

Yeah. I mean, I also love password dumps, when you see actual people’s passwords. There was a recent big dump where there was a member of parliament, and their password was something like “I love Rebecca,” and Wikipedia tells me that this person is married to a person named Lisa. So, that’s going to be an awkward conversation at home.

That’s an awkward conversation and maybe the guy can talk his way out of it. But we’ve seen situations where we have to target a certain department at a certain company, and that certain department is responsible for accessing a bunch of services for customers, which are really important for whatever reason.

Which means we only have to target that particular department, and those people have only those particular mobile phones or laptops. And then we look at the signals that come out of these laptops and phones. And then by accident, as a red teamer sitting at your desk in the middle of the night, you see signals coming out of someone’s phone or laptop and see that their Wi-Fi has previously connected to two of the biggest swinger clubs in the city.

That is where you have to keep your mouth shut and make sure that it doesn’t end up in the report or any kind of data trace, because it’s not really relevant to the project that you’re doing. So, you learn how to forget fast.

Yeah. Okay. What about the technological side? What’s legal on the internet or on public airwaves? Like differences in different countries, whether it’s okay or not to perform a port scan, for example, or to listen in on conversations.

That is extremely sensitive, of course, because it’s bound to whatever country you are in when it comes to listening in on conversations. And again, certain things fall within the law, other things fall outside of it.

Growing up during the eighties, it happened to me several times that I’d pick up the portable telephone, one of the first Chinese-made models, and hear the neighbor talking, because they’re on the same frequency. There’s no real law that says you shouldn’t listen in on two people; you just picked up the phone and heard it. So yeah, have you broken the law? Maybe. But if you just keep your mouth shut, then you stay within that ethical realm.

In the same way, when we perform our red teaming work, we have to be very careful to follow the law, and if we get close to its edges, to know what we’re doing when it comes to listening in on conversations, either digitally or in real life, standing next to someone.

If you overhear a conversation where the company is going to be acquired, there’s going to be massive layoffs, there’s going to be a new flagship product or service being launched, these are the times that you hear these things, but you forget them. You keep your mouth shut.

You don’t go home and tell your spouse or whatever, because these things could have ramifications when people are in the know, people have shares, there’s some kind of monetary gain or loss to be had. So, you have to be really careful there.

Also, you mentioned port scanning. Anything that has to do with the public domain is a very sensitive area, because there are no real laws when it comes to public information.

So if I were to take every single YouTube video and run it through a machine learning library to identify the person in the video, or to correlate whether that person has been in multiple videos, either by their face, their body or their voice, what do you do with that information? Is it legal to store? Is it ethical to keep? Can it be stolen? Can it be abused? Of course. So, sometimes you don’t want to have the data. You want to stay out of those situations, but there are people that try to bend these kinds of rules.

No, that’s interesting. Because when you’re talking about information that’s publicly out there, you’re almost doing OSINT, open source intelligence, on it. It’s information that’s out there. You’re just using it in ways that maybe it wasn’t intended for. So, when does that probing of digital assets cross over into the questionable or unethical?

That’s really up for discussion. We had it with port scanning, where if you’re doing a port scan of the entire internet, which people are doing for research and other purposes, is that for good, is that for bad? Also, you have to look at the zeitgeist.

I mean, back in the day, port scanning the internet was something that no one really knew about, but it was still frowned upon. But then again, you could argue, it’s just a packet, it’s just a piece of information. But at the same time, we both know what kind of systems are on the internet, either intentionally or by accident. So, that could have consequences.

Just blindly doing a drive by scan of the entire internet has consequences. So, if you want to do that for research purposes, for example, you need to know exactly what you’re looking for and you need to be able to reduce the risk that you will hit something, which will have certain consequences that you might never know about because you might say, “I don’t really care that something goes down.”

Maybe there’s a hospital bed online somewhere or some kind of factory that loses power. “They shouldn’t have put it on the internet,” someone says, “and then it should be fine.” But that’s kind of sweet talking yourself into sticking your head in the sand and not considering the consequences of your actions.
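The port scanning being debated here is, mechanically, very simple, which is part of why the ethics matter more than the technique. A minimal TCP connect scan looks something like the sketch below; as the discussion stresses, even this should only ever be pointed at hosts you are explicitly authorized to test, and a careful scanner would add rate limiting precisely to reduce the risk of knocking over fragile devices.

```python
# Minimal TCP connect scan using only the standard library.
import socket

def scan(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
    """Return the subset of `ports` on `host` that accepted a TCP connection.

    connect_ex returns 0 on success instead of raising, which makes it
    convenient for probing many ports in a loop."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)       # don't hang on filtered ports
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Each successful probe completes a full TCP handshake, so it is visible to the target and, as Tom notes, can have side effects on fragile embedded systems, which is exactly the consequence the "it's just a packet" argument glosses over.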

Yeah. Okay. So what do you do if you can’t look away? Let’s say there’s a clear audit trail that you came into contact with some information, some highly sensitive data that was within your scope, but that still, you shouldn’t have. What do you do in that case?

Well, you discuss it with the customer, of course. You try to see if they can handle it in a certain way, where the customer might have had the situation before. So, you want to see what they’ve done before. If not, you want to take it to the legal department of that company, to seek legal advice as far as what the ramifications could be or how to take certain decisions at that point in time.

I know that sounds very high level, but it’s about making sure that you know what is legal and what is not, and whether the NDA (non-disclosure agreement) that you’ve signed actually covers it.

Sometimes, addendums need to be made to NDAs. I’ve experienced that personally, where we came across a situation where, by accident, I gained certain information that I wasn’t supposed to have. But then you sign an extra document to say, “Look, I will now officially forget this information. I hereby promise that I have not stored it anywhere and have not told anyone about it.” And that’s usually the end of it.

But let’s say you download, for example, something that you thought was going to be one thing, but turns out it’s another, and now you’re digitally in possession of this information. Do you have to somehow prove to the customer that you haven’t made any copies of it or have disposed of it? How does that work?

Well, it’s hard to mandate and enforce, of course. And there, you are really bound to your ethical background in saying, “Look, I deleted it. This is the receipt, so to speak, from the software that I used to delete it, or rather to wipe it,” to make sure that it’s really forensically unretrievable, and that should be the end of it. So you need to take all the precautions that you can to make sure that the data is not recoverable, and that there’s no other way someone could have retrieved the information.
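The "wipe rather than delete" distinction Tom draws can be illustrated with a small sketch: overwrite the file's contents before unlinking it, so a plain undelete can't recover them. This is a simplified illustration, not a forensic tool; on SSDs and journaling or copy-on-write filesystems, overwriting in place does not guarantee the old blocks are gone, which is why full-disk encryption is usually the stronger precaution.

```python
# Overwrite a file with random bytes before removing it, so a simple
# undelete of the directory entry won't recover the plaintext.
import os

def overwrite_and_delete(path: str, passes: int = 1) -> None:
    """Overwrite `path` with random data, flush to disk, then remove it.

    Caveat: on SSDs (wear leveling) and journaling/CoW filesystems,
    old copies of the blocks may still survive; this only defeats
    naive recovery of the in-place data."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))   # replace contents with random bytes
            f.flush()
            os.fsync(f.fileno())        # force the overwrite to storage
    os.remove(path)
```

The fsync call matters: without it, the overwrite may sit in the page cache and the file could be unlinked before the random data ever reaches the disk.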

No, I get that. All right. We already touched a little bit about reporting, but let’s talk about that a little bit more. So you’ve finished your testing. You’re creating a report. Are there ethical concerns to consider on how to communicate weaknesses and vulnerabilities to the companies, for example?

Yeah. So as mentioned, it’s very important not to single out a certain person, not to single out a certain department, if it’s only a few people. So you never want to have full names in the reports. Also, not the first letters or whatnot, with which someone can still figure out the names.

Now, having said that, when you do compromise someone’s workstation, again, agreed with the customer, those people need to have their passwords changed of course. So the customer does need to know which people need to have their password changed, but it doesn’t need to be in the report. That can be a meeting or a communication you do with the customer, where you help them select which ones need to have their passwords changed, or you do it as part of a general password change process, for example. That way, you don’t single anyone out.

So: no full passwords, no full names, nothing in the report that can personally identify someone within the company. As for the end result of the assessment and the report, be it good or bad, the consultant performing the test has the responsibility to consider the entire picture when it comes to security weaknesses and vulnerabilities, because naming or not naming a service or an IT asset might already give away information on its own. So you want to be really conscious about that.

Also, don’t forget that consultants, whoever they are, are guests at companies. Everyone always steals with their eyes and their ears. Even though it’s not being reported formally in a document, you will still bring that information somewhere else. So whatever you learn about a company that is not technically part of the report, you should try to forget as fast as possible.

But as I said, if there are certain things that might be a problem, we are ethically bound to put those in the report. If we see a crime being committed, fraud, embezzlement, these kinds of things, we need to bring that to the attention of the company. And they need to take legal action, because if not, you have the responsibility of dealing with it, and we want to stay out of that.

So, it really comes down to, what is the right thing combined with what is legal. Because as that topic has been going on for quite a while, just because it’s legal doesn’t make it right. So we want to make sure that we do everything in our power not to bring ourselves, but also not bring the customer into any kind of trouble and to make sure that we have their interests in mind.

Let’s talk about advice to companies buying these sort of services. What would you recommend companies do to make sure that their own ethical concerns are addressed and that the red teamers they hire are going to behave ethically and within the scope of their local laws and cultural norms?

This is where experience comes in. So, you want to role play a little bit and to go through the motions of what the different stages will be of the project. From gaining information to abusing that information, to gaining some kind of foothold, running around or doing the lateral movement on their network, finding the information you’re supposed to find.

And of course, as a side effect, coming across information that you don’t technically need. And then trying to exfiltrate that information, or to go for some kind of process that will produce the same effect on the radar scope of your customer.

You do a little bit of a risk assessment, as far as what situations or what information you will come across, what is fair game and what is not. And there, we have a pretty detailed list of things that we need to know when going through the motions and when doing these kinds of red team tests.

For example, a lot of the financial industry has red team tests performed. That also means dealing with people trying to walk into an office or a branch office. Banks are very interested in whether someone can just walk in, and in making sure that their security investments actually make sense.

One of our questions is always going to be: the people that we’re going to come across, have they ever been in situations where they’ve suffered some kind of trauma or incident, a hold-up, anything like that? If so, we want to select a different office, or maybe see if we can have different people in the office at a certain time. Because we are not interested in having those people plunge back into trauma or relive those experiences. For us it doesn’t really make any difference, so to speak, but we do care about who we’re performing these services on, and that’s really where good taste comes in.

No, I get what you’re saying. If I was talking to a potential provider of a service and they raised those kinds of concerns on their own, that would certainly tell me as a customer that these guys have thought about this a little bit, that it sounds like they’re on solid ground.

Well, it’s not just about trying to get into companies like wild cowboys. You’re dealing with real people here, who are trying to support their families by working at an organization and trying to do their work.

No, that’s a valid point. But what about, for example, bug bounty participants or hackathons, where you’re not in direct contact with each and every participant, but these people are potentially going to be in a position where they have access to confidential information?

Well, this is kind of the big neon question mark on the wall. Yes, you will write out a bug bounty, again, once a whole set of circumstances and processes are already present, because a bug bounty program can never replace your existing security initiatives, vulnerability management and things like that.

But if that person, with all the best intentions and having clicked through all your terms and conditions, does end up seeing your production database, your customers, your intellectual property or any other kind of information, you need to be very sure, and inform whoever is the stakeholder in that situation, that they will have access to it. And that there’s a risk that that information will get abused or misused, or that something will happen that the company is not expecting.

This brings us to a bigger point. As an end user, either a consumer or a corporate customer of ours ordering a red team or any kind of security assessment, in order to protect data, you want to protect it yourself through software, hardware, a way of working, a process. But that will always involve exposing the data to software, to hardware, to the people who make that hardware or software, or to the different people who are part of a red team.

So, you want to be able to put your data somewhere for safekeeping, but in order to be able to test the safekeeping, you need to see if someone can get in. And if they can get in, they’re going to be able to see your data.

So it’s an inherent risk that you need to calculate when going for bug bounty programs or red teaming, and you need to know what the rules of engagement are when someone does burst through your defenses and sees information that normally wouldn’t be meant for their eyes and ears.

Yeah. Well, thanks for helping us wade through the muddy waters of ethics, Tom.

Pleasure to be here.

That was our show for today. I hope you enjoyed it. Make sure you subscribe to the podcast, and you can reach us with questions and comments on Twitter @CyberSauna. Thanks for listening. 


