[Podcast] The Cyber Year: Expectations for 2019, a Look Back at 2018
What can we expect in the field of cybersecurity in 2019?
For this episode of Cyber Security Sauna, we look ahead to 2019 with five experts and discuss the notable trends of 2018. From phone-based phishing to AI trends, supply chain attacks, IoT, data privacy and more: our roundtable keeps you up to date on all the trends. On this show we welcome Adam Sheehan of MWR Infosecurity, and Laura Kankaala, Tom Van de Wiele, Artturi Lehtiö and Andy Patel of F-Secure. Listen, or read on for the English transcript.
Janne: So Adam, looking at 2018, what was noteworthy to you?
Adam: I think the most noteworthy trend that we observed in 2018 was probably the rise in the mobile phishing threat to employees. That’s both SMS phishing and general phishing being sent to mobile devices. This is not only in terms of frequency but also in terms of effectiveness. So we’ve seen a dramatic increase in actually the amount of phishing that’s going through mobile devices, and also our own internal data shows a slight increase in how vulnerable people are to this.
Okay.
Adam: The reasons for this, I think, are quite interesting. I mean, obviously there’s a huge increase in the use of mobile systems that are becoming almost functionally similar to desktops and laptops. And I think also there’s an important and kind of subtle behavioral point here. You know, we have naturally different patterns of behavior when we’re feeling relaxed or at ease. And of course, you know, we tend to use our desktops or laptops in a work context, in the office, for example. Whereas mobile phishing catches us in weaker moments, I think. Maybe when we’re on the tube, or on a Friday night having drinks. So the behavioral science term for this is cognitive ease. And when we’re in a state of cognitive ease, we actually tend to feel more trusting, we feel warmer. And so we’re seeing that mobile phishing is actually a more powerful attack vector than email phishing.
Because we’re more comfortable with the device in our hands, we’re less likely to be suspicious.
Adam: Yes, so that’s a large part of it. I also think, you know, how do we typically counter phishing in general? We have security awareness training. And that security awareness training is very often done in the office, done at work. It sort of reinforces this implicit idea that security is something which happens at work. People don’t necessarily take it home with them to apply on their mobile devices, so the awareness training ends up training people for one particular context, and maybe not the one that’s most impactful.
There’s an interesting behavioral science finding that people who study for exams in a setting similar to where the exam eventually takes place actually outperform those who study at home or in some random context. And I think a similar thing is true for security awareness. So if we move security awareness training away from being very office-centric, very at-work-centric, towards point-in-time training on mobile devices, there’s probably an angle there in terms of how we can head off this trend.
More on the technical side, obviously a lot of devices are unmanaged. We can move towards using managed devices, using mobile endpoint protection, and just generally tightening up technical controls on mobiles. Passwords as well, right? People tend to think, “Well, you know, I don’t necessarily need to protect it in quite the same way. Stuff on the mobile is less impactful,” and that’s increasingly untrue, I think.
I never thought about how the environment of the training affects your behavior. That’s interesting. Do you think it’s as easy for us to behave securely on a mobile device? I mean, you can’t hover over links to see where they actually point, and so forth.
Adam: Absolutely. In terms of our raw capability, before we even get to the human psychology of how the devices are used, there are a lot of inbuilt challenges there. You can’t hover over links so easily and scan these things. Very often it’s harder to check the URL, check the domain. There are challenges there before we even get to the differences in how people use these devices and the fact that we’re naturally more trusting of them.
Andy: Are you seeing this primarily for regular phishing, or have you seen any examples of spear phishing done in this way?
Adam: I think the split between spear phishing and regular phishing, in terms of how frequent each is, is probably roughly similar to what you get via email on a laptop. I don’t think there’s necessarily more or less. The nature of what we would classify as spear phishing when it comes to mobile devices is kind of interesting, right? For example, we get attacks where the message quite literally arrives in the same thread as your bank’s messages, in the sense that the bank’s sender identity is compromised. That has the effectiveness, the believability, the trustworthiness of spear phishing, although it can be done en masse. And so for that reason it’s particularly important that people understand how this can happen, how they’re actually vulnerable on mobile as well.
Artturi: In espionage cases, for instance, you don’t necessarily need to compromise anything other than the person’s mobile device, and you already get a view into a large part of their life. And there have been cases where suspected state-sponsored actors target purely the mobile devices of their targets via, for instance, SMS phishing, and try to get malware on those mobile devices, because most people have email on their phones already and they do lots of phone calls and text messaging and so forth. So that’s already very valuable from an intelligence perspective as well.
Adam: Yeah, absolutely.
What are your predictions for 2019, Adam? What are we going to see more of in the next year?
Adam: Something I’ve seen beginning this year, and I think is really going to take off next year, is a trend of organizations being interested not only in what their employees are doing in terms of click rate, download rate, responding to voice phishing, that sort of thing, but also why they are doing these things. I think for too long there’s been an assumption that if Organization A has a high click rate, let’s say on email phishing, and Organization B has the same observable high click rate on email phishing, that they should be offered more or less the same solution. And in fact that’s like going to the doctor with a strong headache and the doctor saying “Yes, we have a pill that deals with headaches.” You know, actually in one case the underlying problem could be one disease, one vulnerability, one issue. And in another case, the underlying issue could be quite different. And so I think delving into the root cause analysis here allows us to actually look into getting fundamentally different solutions which are quite bespoke, and I think that’s probably really going to take off in 2019, that next level analysis of what’s really going on.
Laura: Going forward, it’s very important for companies to actually invest in training and the awareness part of security, because most of the time the path of least resistance for the attacker is through the employees, through phishing and these kinds of attacks. So I think this is going to be ever more relevant in the future as well.
Adam: Yeah, absolutely.
Tom, what was noteworthy in 2018 for you?
Tom: As far as trends in 2018, I think we’ve kind of hit peak ransomware, at least when it comes to the awareness part. Companies are aware that they can get hit by very opportunistic attackers who try to wedge themselves into the organization, either through internet-exposed assets or through phishing campaigns. We’ve seen lots of big companies getting hit, and that has spurred other companies into actually focusing on this, to see how they could be impacted and how they might become a victim. So that’s certainly a good thing, a positive thing, when looking at the information security market.
Artturi: Related to ransomware, I fully agree. It feels to me like the easy money, the low hanging fruit, has been collected already, and attackers are trying to figure out new ways of making money because it just isn’t as easy anymore. We’ve seen it on the consumer side for a few years already: ransomware doesn’t seem to be as profitable anymore. And I think that’s in large part why attackers started shifting to extorting companies. But now companies are also finally waking up to this, and as it gets harder to make money via extortion and ransomware, attackers will try to find the next easy money.
So it’s going to be cryptominers from here on out.
Artturi: I think cryptomining is probably one of the areas attackers have been shifting to, again because cryptocurrencies became familiar to them, if not before, then finally with ransomware. And since that’s something they know – they know how to turn cryptocurrencies into real-world cash so they can actually go and buy their BMWs – cryptomining is kind of an easier jump.
Tom: Absolutely. And we’re going to see ransomware pop up wherever it can, on a more opportunistic basis. But as you already mentioned, I think as an industry we’ve been successful, through different methods, at raising the cost of attack, and that’s ultimately what you want as an industry.
Andy: I wanted to ask…cryptominers. Do we know how much money they’re actually making? Because obviously ransomware is a service. You need a whole infrastructure around it. You need support people who can help victims get their Bitcoin and pay, and you need infrastructure to take payment and hand over the key when someone pays, and all that. Whereas cryptominers are free, and they’re nonintrusive. If someone has a cryptominer they probably don’t even know, and they wouldn’t do anything about it anyway. But my question is, okay, there’s an overhead to ransomware and all that. But is this cryptominer stuff making as much net profit as they were making with ransomware?
Artturi: I have no idea how high you can go with cryptomining. But one example I recently ran into was a case from earlier this summer, where a researcher found multiple infected Docker images on Docker Hub, an open repository for Docker images. Those malicious images included cryptominers, so that whenever someone used one of them as the base for their own work, they were, unknown to them, mining cryptocurrency as well. And in that case, the researcher estimated that the attackers had made about $90,000 from that single campaign, in the Monero cryptocurrency.
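For readers who build on public images, the defensive habit this case suggests is auditing an image’s layer history before trusting it. Here’s a minimal sketch of that idea in Python; the image name and the keyword list are illustrative assumptions, not details from the case Artturi describes.

```python
# A minimal sketch: dump an image's layer-creating commands via `docker
# history` and flag strings commonly associated with embedded miners.
import subprocess

SUSPICIOUS = ["xmrig", "minerd", "stratum+tcp", "cryptonight"]  # illustrative

def audit_image_history(image: str) -> list[str]:
    """Return the layer-creating commands in `image` that match miner-like strings."""
    out = subprocess.run(
        ["docker", "history", "--no-trunc", "--format", "{{.CreatedBy}}", image],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line for line in out.splitlines()
            if any(s in line.lower() for s in SUSPICIOUS)]

if __name__ == "__main__":
    # Hypothetical image name; substitute whatever base image you're vetting.
    for cmd in audit_image_history("some-vendor/base-image:latest"):
        print("suspicious layer:", cmd)
```

A keyword scan is obviously no guarantee, but it raises the cost of exactly the kind of lazy embedding seen in that campaign.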
Tom: As a second trend, I would say we’re seeing more companies coming to us and other security companies with questions about designing systems and solutions with privacy built in. Not only because of GDPR, but GDPR is kind of the elephant in the room here. A lot of companies have gone through the GDPR meat grinder, and it’s not exactly an experience they want to repeat. So, while trying to get away from the more reactive way of looking at things, companies – not all of them obviously, but the larger ones – are looking at design specs and more requirements aimed at preventing these situations from happening, and trying to come up with a design where privacy can at least be controlled.
That’s a very encouraging thought. I think as an industry for a long time, we’ve been sort of stuck in the same trenches figuring that nothing ever changes, people are still falling for the same phishes they were five years ago and so forth. But you’re actually seeing that things are improving.
Tom: Well, we see a slight improvement, and if something is to be learned, it’s that human beings will never learn. So I’m a big proponent of the prevention principle first, and then the more reactive parts like security awareness and other themes. I sincerely hope for 2019 that companies and organizations alike put more money into the preventative side, and then into security awareness. I mean, you need both, but there’s a certain order to do them in.
Are you talking about awareness training, as in classroom training? Or everything related to that space, like phishing campaigns?
Tom: Well, phishing. I mean, doing lots of red teaming, we usually get in by sending emails. Now, I ask you and the listeners: Who is sending you all these Office documents from the internet? We have on-premise SharePoint and file sharing services. If you really need to get Office documents from people who are not linked to your company, then set up a share drive or find some other way to interact with that person. But everyone is just given this right and privilege to receive documents which might contain malicious code, even people who almost never have to receive Office documents from outside the company… I’m a very big believer in the preventative side: replacing those methods with very specific file sharing services, with rules and policies around them, rather than allowing everyone the blanket privilege of receiving potentially malicious code and trying to stack security defenses on top of each other. And then trying to provide security awareness training on top of that, saying, “I know the functionality is there, I know you don’t really need it, but please try not to click on anything.” I think that’s a little bit backwards.
I don’t know, Tom, I think you’re going to be super unpopular when you enforce rules in companies where people can’t receive Word documents as email attachments anymore.
Tom: Oh, they can. They can, but you have to make it easier. You have to replace it with something else. Just like I can invite someone to a Skype call in Outlook in two seconds, I should be able to set up a file sharing link that the other person can then use to actually get the files to me.
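To make that “two-second link” idea concrete, here’s a minimal sketch of a signed, expiring download link of the kind an internal file sharing service might hand out. The secret, host name, and URL layout are illustrative assumptions, not a description of any particular product.

```python
# A minimal sketch: HMAC-sign a file ID plus expiry time so the link can be
# shared like any URL but stops working after the deadline.
import hashlib
import hmac
import time
from urllib.parse import urlencode

SECRET_KEY = b"replace-with-a-real-secret"  # hypothetical service-side secret

def make_share_link(file_id: str, ttl_seconds: int = 3600) -> str:
    expires = int(time.time()) + ttl_seconds
    payload = f"{file_id}:{expires}".encode()
    sig = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return "https://files.example.com/download?" + urlencode(
        {"file": file_id, "expires": expires, "sig": sig})

def verify_share_link(file_id: str, expires: int, sig: str) -> bool:
    if time.time() > expires:
        return False  # the link has expired
    payload = f"{file_id}:{expires}".encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)  # constant-time comparison

print(make_share_link("quarterly-report.docx"))
```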
Oh, I gotcha.
Tom: It requires a different way of thinking. It might not be popular, but it certainly helps to split up these domains. And in the same discussion, we ask companies, “Show us the computers you’re using that can access your payroll, that can access your most critical systems.” And the person in question points at their computer. And then we ask them, “Where do you receive email from the internet, Facebook, YouTube?” And they look at you, very confused, and point to the same computer. Now, incident response services, whether internal or certainly external, are not cheap, to say the least. So the cost of just finding out whether something is an incident already outweighs the price of an extra computer or process that would split those domains and thus reinforce that preventative recommendation.
Artturi: In general, I strongly agree, and I’m a big believer that we put too much blame on the humans or the human aspect of this and we have a long way to go in the cybersecurity industry towards helping people do more right or not make mistakes as often. I don’t think we can just blame people. But when it comes to file sharing, for instance, I’ve struggled a lot with trying to figure out what would be a good solution. On the one hand, it’s obvious that if we could reduce the usage of email attachments and just sharing Office documents, for instance, as email attachments, then when the bad guy sends you a Word document and wants you to click on it, it will be much more suspicious, because that doesn’t happen usually. But then if the alternative is to start using file sharing services for that, then the other popular method for bad guys to get you to execute their Office document is to send you a link saying, please go to this thing to download this eFax, or this invoice or whatever.
So do we then risk training the humans to click on links in emails to file sharing sites and just open up whatever they download from the internet, or do we risk teaching them to open up whatever email attachment they receive? Is there some kind of balance, or some third way we could do this, where we wouldn’t have to reinforce one or the other dangerous behavior?
Yeah, I mean, when I’m looking at a file sharing site, I don’t find it suspicious if it asks me for my credentials, whereas when I open a Word document, that is more suspicious.
Andy: Not to mention that our in-house security training, phishing training thing, does send Dropbox-looking links to us.
Tom: Ideally, you want to get to a point where that whole situation can be avoided. I would rather look at what kinds of interactions are required for each part of the business, and see how security can be enforced not by doing that, but by having certain applications in place where those things are just set up and just work, again, the same way you would just generate a Skype link or whatever it is. So the only thing you have to do is go into the application, and if the file doesn’t show up there, well, then something is malfunctioning.
Interesting.
Tom: Maybe one other prediction I have for 2019, which I think will continue into the future, is that automation and detection and response capabilities at customers are driving up the price of performing targeted attacks. I mean, we perform lots of targeted attack simulations for customers, and we see a definite trend where more and more software and services are being introduced because customers are being hit by certain attacks, or because their competitors are. And that increase in automation on the detection side is of course discouraging some attackers and making it more difficult for others to slip into companies undetected. I hope that continues, because we want the price of attack to go up.
So what else caught our eye in 2018, Laura?
Laura: One very interesting trend is how privacy was impacted both positively and negatively. So we have GDPR, which is a really good initiative to actually improve privacy for end users and consumers. But at the same time, we were faced with big privacy breaches such as the Facebook breach, which affected not only Facebook’s users but also the applications using Facebook’s single sign-on feature.
So when the attackers were able to get the access tokens of these users, they could actually log into these third party applications. There are ways of implementing this single sign-on solution securely, so that each time you log into a third party application, you have to provide your Facebook credentials again, which prevents these kinds of attacks. But most of the applications using Facebook as an identity provider are not implementing it this way. What they’re doing is sacrificing security for usability, which is a really common trade-off when you’re thinking about user experience. But at the same time, when breaches like this happen, it means somebody can take these access tokens and log into your applications – let’s say Uber, Tinder, and other applications that use Facebook as a single sign-on identity provider. And they could potentially get at very sensitive details about you: not only who you’ve been talking to, but also the conversations you’ve had with other people, where you’ve been, what you’ve bought, and things like that.
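To make the safer pattern Laura describes concrete: instead of trusting a stored access token, the application sends the user back through the identity provider to re-enter credentials before anything sensitive. Facebook’s OAuth dialog supports an auth_type=reauthenticate parameter for this; the app ID, redirect URI, and state handling below are illustrative assumptions.

```python
# A minimal sketch: build a login URL that forces fresh credential entry at
# the identity provider, so a stolen access token alone is not enough.
from urllib.parse import urlencode

def reauth_login_url(app_id: str, redirect_uri: str, state: str) -> str:
    params = {
        "client_id": app_id,
        "redirect_uri": redirect_uri,
        "state": state,                    # opaque anti-CSRF value
        "auth_type": "reauthenticate",     # require the password again
    }
    return "https://www.facebook.com/v3.2/dialog/oauth?" + urlencode(params)

# Hypothetical usage: redirect here before showing, say, message history.
print(reauth_login_url("123456789", "https://app.example.com/callback", "xyz"))
```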
So I think people are starting to understand how much they’re actually entrusting to these big technology companies, and to the companies it’s just data. And you’re hoping that they’ll take good care of it.
You’ve gone ahead now and invoked the ghost of GDPR in the room. So I have to ask the obvious question. Are we going to see big fines in 2019?
Laura: Naturally, I hope that everything has gone nice and solid and there’s nothing to worry about. But I’m afraid that we will see some fines coming up.
Andy: You talked about how it would be theoretically possible to get into someone’s Tinder or stuff like that.
Yeah, did you ever see anything like that happening?
Laura: No. The Tinder part, for example, was research done by some information security researchers – I’m sorry, I can’t remember the name right now. Most of the things you could actually do with the access tokens were hypothetical. You could, for example, log in to somebody’s Tinder and read the messages there, and they would remain in an unread state, so the person whose account you accessed might not even realize you’d read those messages. And for Uber, I think they were able to tip the driver with your access tokens. The investigation is naturally still going on, but the companies couldn’t find that anyone had actually used these. We don’t know what’s going to turn up in the future, of course.
What about 2019, Laura? What does the future have in store for us?
Laura: This year, and also in the past, we’ve seen IoT growing – smart home devices and enterprise smart devices, but also just internet-connected devices overall. As that growth continues, I assume we’re going to see more exploitation of those devices as well. This year we’ve seen devices being exploited through everything from poor password policies to remote code execution to DNS rebinding attacks and whatnot.
So I think this will be very relevant in the coming year too, and what I really hope will happen next year is that we start to get more regulation around IoT. There would have to be more rules on what security level these devices have to meet before they enter the market, how their automatic update processes work, and their overall information security posture. Some of these devices end up in consumer households, so we need more concrete consumer protection there as well. And GDPR could be extended to actually cover IoT devices, or some other regulation could be put in place to extend it to cover them.
Do you think we’re going to be seeing more IoT companies with bug bounty programs? And would that be something you’d welcome?
Laura: I would definitely welcome that. I know there are some problems with companies enrolling IoT devices in these bug bounty programs because, for example, the update processes can be pretty complicated – it’s not easy to update IoT firmware or hardware for the bugs that are discovered. But from speaking with people who do bug bounties, I know they would be super interested. And they’re already doing this to some extent, but there’s no concrete platform to actually report these findings. So yeah, I hope more companies go for this, especially for smart home devices.
Tom: We see more and more people being introduced to hardware hacking, building IoT gadgets, finding out what the interaction models are for particular scenarios. In the long run, that is going to help us build the competence we need right now and so badly lack: finding ways to build secure systems that have security by design, that generate dynamic, unique passwords when you take them out of the box, that have built-in software update services you’re actually paying for as part of the price. It’s these kinds of things that will really help us ten years from now – not just industry, but also, for example, the proliferation of hacker spaces that focus on building things with hardware. And that’s a trend I hope will come to fruition in the years to come.
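As a rough illustration of what “unique passwords out of the box” can mean in practice, here’s a minimal sketch in which the factory derives each unit’s default password from its serial number and a secret that never ships in the firmware, then prints it on the device label. The secret, serial format, and password length are illustrative assumptions.

```python
# A minimal sketch: per-device default passwords derived at the factory.
# The secret stays at the factory; the firmware never contains it, so the
# password cannot be recomputed from a dumped firmware image.
import hashlib
import hmac

FACTORY_SECRET = b"kept-in-the-factory-hsm-only"  # hypothetical, never shipped

def default_password(serial: str, length: int = 12) -> str:
    digest = hmac.new(FACTORY_SECRET, serial.encode(), hashlib.sha256).hexdigest()
    return digest[:length]

print(default_password("SN-2019-00042"))  # unique per unit, printed on the label
```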
There is still this fetish belief that the market will figure it out, and the market is not going to figure it out, because neither the buyer nor the vendor is interested in security at this point. It is not a selling point. Mikko Hypponen hammers on this time and time again, and he’s right. We need to create solid incentives for security – not just the sword of Damocles hanging over your head because some legislation is going to come knocking. Unfortunately, we’re going to have to come up with a common set of requirements as to what we want the future to look like.
Artturi, you wanted to talk about supply chains?
Artturi: Yeah, I think supply chain attacks are something that’s been talked about quite a lot over the past years. It’s been brewing in the background for a long time, and it feels to me like it’s becoming an increasingly common and big problem. I do expect it to increase in the future as well. In terms of what I mean by supply chain attacks: the one people most often think of is the NotPetya case from the summer of 2017. The way the ransomware initially started spreading was as a compromised update for the accounting software that was most popular in Ukraine. And that’s definitely one type of supply chain attack.
Another area that’s been talked about quite a lot is compromising a service provider as a way to gain access to that provider’s customers. And another interesting area that I liken to supply chain attacks is breach of trust: you’re putting a lot of trust in the creators or maintainers of software to continue doing what they promise to do, to continue providing the software they say they’re providing.
And for instance, there was an interesting case about two weeks back in Finland. People often use ad blockers in their web browsers, and there are lists of bad URLs for ad blockers to block. Some of those are country-specific, so there’s a popular list of Finnish ad networks and advertising sites, maintained by a Finnish person, and those URLs get blocked when you have your ad blocker set to automatically include that list. Now, the maintainer of that Finnish list decided to make a political statement. There had been discussions between the Finnish government and trade unions about workers’ rights, and the person had a strong opinion on this, so they added the websites of the major Finnish workers’ unions to the list of blocked sites, along with a comment explaining why. And suddenly people were unable to access the sites of workers’ unions because of this political statement. This was not a malicious third party compromising a supply chain; this was just the person who’d been providing a really, really good list and keeping it up to date suddenly doing something other than what people were trusting him to do.
But there have also been cases of, for example, somebody running a browser add-on and then stopping development and selling it. Or control of the add-on slipping away somehow, so that somebody gains access to it and starts using it for other purposes.
Artturi: I think that’s definitely another really good example of, you know, we’re putting large parts of our lives in the hands of others, where we don’t always realize how much we’re relying on others or trusting others. And we don’t really have a way of verifying that they are still worthy of that trust.
Laura: Yeah, I totally agree with you, Artturi, especially when it comes to developers moving to continuous delivery and continuous integration models. They rely more heavily than before on, for example, Docker, and on all these JavaScript npm repositories, and you’re trusting them to provide the level of security you’d expect them to have. Of course, companies have come up with ways to mitigate those issues as well, by having private repositories for Docker or npm, the same as you would have for Linux repositories, for example. And I think it’s an interesting trend that it’s becoming more lucrative to attack the source code itself than any other part of the application, because the frameworks themselves are becoming so advanced. They’re not that vulnerable to basic attacks any longer, and especially when you have these continuous pipelines, it’s harder to get in between any other part of the development lifecycle.
Artturi: I agree, that’s definitely another important area, and I think closely related to that is again, the way software is being developed these days, where it’s very common to take components from others and utilize those as well.
Tom: Source code repositories have always been a target for attackers. Anything on the internet, software or service, has been a target for opportunistic attackers. So when we say supply chain attacks, doesn’t that mean basically anything that can be used by someone else on the internet? I mean, on the targeted side, when we saw the compromise of RSA for the sole purpose of getting into Lockheed Martin – that was in 2011 – that was really targeted, and that would fully go into the bucket of a targeted supply chain attack, where a person or an organization saw the dependency between two things and said, “Okay, we can’t attack this thing directly. So let’s try a different way.” But as far as history is concerned, we’ve had Arch Linux packages being compromised, and we have packages being backdoored because there are so many players as part of the process.
So I agree with you that we’re going to see a lot more of this, but I think that’s also partly because companies are using more cloud services. For most companies we’ve done business with on the development side, the only infrastructure they have is the WiFi router in the corner. All the rest lives on GitHub and Azure.
Could we in fact define supply chain attacks even more widely than that? I mean, we come across a lot of companies who are choosing to trust this or that component or library, or whatever they use, because it’s so widely used, and things like that. But these are all upstream from the company’s point of view. So it’s all supply chain.
Artturi: The way I’d put it, I think one of the key takeaways is this: the way attackers breach your organization may not be something that’s directly under your control, or something you’ve thought of as your responsibility. This ties into discussions like IoT as well – is keeping an IoT device up to date the consumer’s responsibility? What about IoT in corporate environments? Companies try to figure out what they’re responsible for and take care of that, but it gets much harder when there are things that can cause risks for you that you can’t actually control.
Our resident AI guy, Andy, you wanted to talk about machine learning, and I think you wanted to focus on reinforcement learning. What, in layman’s terms, is reinforcement learning? How would you define it in a nutshell?
Andy: It’s the process of teaching an actor to interact with its environment by receiving rewards depending on the actions it takes. At the beginning, your reinforcement learning model doesn’t know what to do, so it guesses things and sees what happens. Eventually it figures out that some things work better, so it starts choosing those actions, and it ends up behaving the way you want it to.
So when I have my reinforcement learning algorithm learning to play a racing game, the first thing it tries is driving straight into a wall. And when that doesn’t work, it tries something else the next time, and eventually figures out how to stay on the road.
Andy: Actually, what would happen is it would press the gas, it would press the brake, it would turn, it would turn, it would press the gas. It would spaz around for a very long time until it starts to figure out what’s good. Then it starts learning: oh, press the gas a lot. Oh, then it hits a wall. Then it’s like, okay, press the gas and turn, and then it skids out of control. Then it’s like, okay, press the gas, now hit the brakes really hard, and then it sort of stops, and it might end up turning around and going the wrong way. But it’ll try lots of things until it figures out what’s right.
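For readers who want to see that trial-and-error loop in code, here’s a toy tabular Q-learning sketch: an agent on a five-cell corridor that only gets a reward at the right-hand end. The environment and all the hyperparameters are illustrative assumptions, not anything from the episode.

```python
# A toy Q-learning loop: guess, observe the reward, nudge the value estimate.
import random

N_STATES, ACTIONS = 5, [0, 1]          # actions: 0 = move left, 1 = move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.3  # learning rate, discount, exploration
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state: int, action: int) -> tuple[int, float]:
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    return nxt, 1.0 if nxt == N_STATES - 1 else 0.0  # reward only at the goal

for _ in range(500):                   # episodes: random at first, informed later
    s = 0
    while s != N_STATES - 1:
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)               # explore: try something
        else:
            a = max(ACTIONS, key=lambda x: Q[s][x])  # exploit: do what worked
        nxt, reward = step(s, a)
        # Move the estimate toward reward plus discounted future value.
        Q[s][a] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[s][a])
        s = nxt

print([round(max(q), 2) for q in Q])   # learned values rise toward the goal
```

The “spazzing around” Andy describes is the exploration term; as the Q table fills in, the greedy choice takes over and the agent stays on the road, so to speak.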
It’s being used not just for playing games. It’s being used by Facebook to decide whether to send you a notification about something, to adjust the quality of streaming video on the fly, to route packets across networks, in financial trading models… just lots of things. People keep finding uses for it.
And I guess my prediction for next year is that we’ll see a lot more progress in reinforcement learning. In terms of cybersecurity, since that’s what we’re talking about: this year at Black Hat USA, one group showed Deep Exploit, a reinforcement learning model that runs a number of different actors in parallel, maybe on different machines, and trains them to do penetration testing. It learns which attacks work against which profiles. I think it’s still somewhat academic-looking, but it’s pretty cool.
And as you can imagine, there are many other similar applications in cybersecurity, mostly on the penetration testing or fuzzing side, that are interesting – like password guessing, or application fuzzing, things like that. So I would imagine people might publish something, even if it’s just academic, that uses reinforcement learning for these sorts of things.
What was interesting about 2018?
Andy: Yeah, so Cambridge Analytica, and the whole big controversy about how they took some data, most of which was publicly available, and did some nasty things with it. Some of those things would have been based on the sort of data analysis techniques that are used every day, right? The ones used for marketing analytics, targeted marketing campaigns, recommendation systems, predictive analysis, things like that. So they took that data and worked it up in a way that let them manipulate the public, or try to. And I think this is the moment when maybe more people started to understand what’s possible with data that’s mostly freely available.
Artturi: There’s the legendary case from 2012 where Target, the store chain, sent advertising for items you need when a child is born, and that’s how a teenager’s parents found out their daughter was pregnant. That caused a huge outcry. And so, like you said, people have learned about how advertising can be targeted and things like that. But it seems like people are really, really bad at understanding that if it can be done with one type of data, it can be done with other types of data. If it can be done for one purpose, it can probably be done for another purpose. So when do people start to learn that if it happens somewhere, there’s a high likelihood it’s going to happen again?
Andy: Agreed. Yeah, agreed. I don’t think any particularly advanced techniques were used for the Target thing, for figuring out when someone’s pregnant and then sending them the right kind of advertising. That sort of stuff has been available for a long time, right? And it has been in use for a long time. But nobody really connected the dots to the nefarious uses for it – or more nefarious than we were already seeing at that point. Nothing particularly politically motivated, or that sort of thing.
And social engineering, right? Getting you to do something that might lead to you being phished, or scammed, or something like that. So I guess my point is that when we get questions about how AI is being used maliciously, I think that’s a good thing to point at, to say: “Look at what they could do with that data. Think about what could be done – what anyone could do if they wanted to do something malicious with data that is mostly publicly available.”
That was our show for today. I hope you enjoyed it. Make sure you subscribe to the podcast, and you can reach us with questions and comments on Twitter @CyberSauna. Thanks for listening.