
Episode 11 | The Rise of AI and Deliberate Deception

Melissa Michael

02.08.18 29 min. read



Disinformation. Fake news. Social media manipulation. Lately another dark side of the internet has come into focus – its use as a tool for deliberate deception. Technologies like machine learning and artificial intelligence are being employed to play hoaxes and mislead on purpose. Seeing is no longer believing – and moving forward, it’s only going to get harder to distinguish facts from falsehoods.

Andy Patel from F-Secure’s Artificial Intelligence Center of Excellence has been studying this phenomenon, and for Episode 11 of Cyber Security Sauna, he stopped by to share what he’s learned about Twitter bots, deepfakes, voice cloning and the tools that make it all possible. Do concerns about these technologies outweigh the benefits, and what’s the effect on society? Listen or read on to hear Andy’s thoughts.

Janne: Welcome, Andy.

Andy: Thanks.

So what is the Artificial Intelligence Center of Excellence, what do you guys do?

We combine expertise in cyber security with expertise in machine learning and artificial intelligence techniques.

Okay, everybody and their brother is talking about artificial intelligence and machine learning, what makes ours different?

That’s about the cyber security expertise. So we combine our threat intelligence and our knowledge of how attackers work. We’re doing everything from data science, looking for patterns, anomalies, things like that in the data that we have, and also creating models, not only to be used in our backend or to be used to protect customers, but also as tooling for our own people, so they can spend more of their time on being creative and doing more interesting work. We’ve been using machine learning techniques for like, the last decade.

Andy Patel, F-Secure (photo credit: Michael Sandelson)

So we’re proud to say we’ve been doing it before it was cool.

Yes, yes. We were doing it before anyone was using the word artificial intelligence to describe it.

Your research has focused on Twitter. What got you started down that path?

I played around with the Twitter API as a project with a colleague. We were looking to do some sentiment analysis on tweets. It was a neat way to learn Python. And then I didn’t do anything with it for a few years, and then a colleague of mine had this idea of looking at Donald Trump’s tweets. He had a theory that Donald Trump at the time was sending his own tweets from an Android phone, but that his staff were using an iPhone. And this was not that long after he had become president. We pulled the last 3,200 tweets, which the API allows you to do –

That’s about an afternoon’s worth.

Right, yeah. (laughing) And we looked at the source field, which tells you what agent published the tweet. And we actually did plot out graphs of what time of the day he typically tweets, and what device was being used, and whether his Twitter behavior had changed prior to the election and after the election and these sort of things. So I built a little tool, we did all the visualizations in Excel, and it was all very simple but it was kind of fun. And after that I started to play around more with the API and started creating tools and stuff, and sort of went from there.
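For the curious, here’s a minimal sketch of that kind of timeline pull, assuming the classic tweepy library (its pre-4.0 interface) and placeholder credentials. The 3,200-tweet cap and the source field are as described above; everything else is illustrative.

```python
import tweepy
from collections import Counter

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

# user_timeline returns at most 200 tweets per page and roughly 3,200
# in total, which is the API limit mentioned above.
tweets = list(tweepy.Cursor(api.user_timeline,
                            screen_name="realDonaldTrump",
                            count=200).items(3200))

# Tally which client published each tweet, and at what hour of the day.
sources = Counter(t.source for t in tweets)
hours = Counter(t.created_at.hour for t in tweets)

print(sources.most_common())   # e.g. Twitter for Android vs. iPhone
print(sorted(hours.items()))   # tweeting activity by hour
```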

So had his behavior changed?

In terms of the device that he was using, yes it had. And it was pretty obvious that someone had taken his phone away from him at some point. But I think he got it back now.

Okay, leaving aside Donald Trump, what else have you been finding about Twitter?

Since then, I’ve written a whole bunch of tools and I’ve been trying to programmatically determine things like if a user is a bot or if certain tweets or trends are being amplified artificially, if someone has purchased followers, and looking also for scams being propagated across the Twittersphere.

How do you detect something like that?

You can attach to a Twitter stream with some search terms, and it will provide you with a tweet object for every tweet that matches those search terms. So if I start my script off with the word Trump, it will return me all tweets that contain the word Trump or are mentioning Donald Trump. It turns out to be about 50 tweets a second if you attach to a stream like that, which is fairly fast. And these tweet objects contain lots of data: the tweet itself, the text, when it was tweeted, whether it was a retweet. It also contains the language field, the source, what device was used to tweet, and information about the user as well: what that user’s name is, when their account was created, how many followers they have, how many people they follow, how many tweets they’ve published, how many likes they have, and lots and lots of data like this.

There’s actually sort of more data within the data, so by just saving this stream of objects, you can get a timeline of tweets that have happened, and you can also start looking at interactions like who’s retweeting whom, what subjects are popular, what hashtags are being used the most, things like that. And you can actually get a lot of neat information out of these things.
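As a rough illustration of attaching to the filtered stream, here’s a hypothetical listener in the same pre-4.0 tweepy style. It prints a few of the tweet-object fields discussed above; the credentials are placeholders.

```python
import tweepy

class Listener(tweepy.StreamListener):
    def on_status(self, status):
        user = status.user
        print(status.created_at,
              status.source,            # client/device used to tweet
              user.screen_name,
              user.followers_count,
              user.friends_count,       # number of accounts the user follows
              status.text[:80])

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")

# Delivers every tweet matching the search term, as described above.
tweepy.Stream(auth, Listener()).filter(track=["Trump"])
```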

How are bots different from actual human beings?

Well, there actually are a number of different things that we could call bots. Bots could be, for instance, an account that automatically tweets. And this isn’t necessarily against Twitter’s terms of service. For instance, a company might set up a scheduling service to tweet marketing-related tweets a few times a day. Or news outlets might retweet stories they have published that day. So that’s one type of bot.

Another type of bot is multiple accounts that are controlled by a single entity. So someone will set up or buy multiple Twitter accounts and then use a piece of software to control those accounts. To have those accounts follow other accounts, to have those accounts retweet other accounts, to have those accounts tweet stuff. Those kinds of bots can be used, for instance, to amplify something by retweeting it across thousands of different accounts or boost someone’s follower count, things like this.

Yeah, that’s the kind of bots I think we’re talking about. So how is that behavior different from a human being?

What they’re trying to do is emulate human beings as much as possible. The owner of these bots is trying to make it look like there are thousands of people interested in something, or supporting some cause or something like that. And so these bots try to fly under the radar by looking as real as possible. They may have been set up with an avatar, an actual picture of someone, a written bio; they follow some people, they’ve got some followers, they’ve liked some tweets. They try to make it look like a regular user, so they tweet about the things normal people might tweet about, such that when they’re actually used for something else it doesn’t stand out; if you look at the account you might think it’s a regular person. And that’s what makes these things very difficult to find.

So how do you?

Well, what you need to do is find something suspicious looking and then start analyzing it to find out whether your suspicions were correct. So for instance, if you see an account that has not that many followers suddenly get retweeted by thousands of accounts, you can look at when that happened. If you plot the number of retweets over time, you can see a pattern there that might be indicative of mass retweeting over a short period of time. Or it could be that the people controlling the bots are careful and have the retweets come in over a day or two, and then it might look quite organic.

Generally speaking, we’re just looking for things that would trigger a further analysis of the situation. And then of course you’re going to look at the accounts that retweeted that tweet. What relationship do they have to that person? Do they look like regular accounts? Are they following each other? All kinds of things may be indicative of a herd of bots controlled by a single entity.
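A toy version of that burst check, assuming you’ve already collected the timestamps at which a tweet was retweeted; the threshold is arbitrary and would need tuning in practice.

```python
from collections import Counter

def retweet_bursts(timestamps, threshold=500):
    """timestamps: datetime objects, one per observed retweet."""
    # Bin retweets into hourly buckets...
    buckets = Counter(ts.replace(minute=0, second=0, microsecond=0)
                      for ts in timestamps)
    # ...and flag any hour with an implausible spike. Thousands of
    # retweets landing in one or two buckets suggests mass retweeting;
    # the same total spread over a day or two can look organic.
    return [(hour, n) for hour, n in sorted(buckets.items())
            if n >= threshold]
```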

I guess when you find that herd of bots that is an amplifying chamber for a specific message, you then can take everything these bots are retweeting with a grain of salt.

Yeah, I mean, what you might want to do from there is look at what they continue to do and see where else they’re being used. Actually, there are these Twitter marketing services, and I don’t know whether they’re really within Twitter’s terms of service, but you can go to a website and buy retweets. So people may want their product promoted or something, and they’ll have a bunch of bots retweet it. And of course these bots themselves probably aren’t being followed by anyone of any relevance. But the fact that something got retweeted a lot may push up the chances of you seeing it.

Twitter does have these quality filters, and they’re actually pretty good. So if you get one of your tweets liked or retweeted by something that’s dubious-looking, you might not even get a notification about it; it’s suppressed. So Twitter is attacking this problem partly by suspending accounts, but often it’s the quality filter that makes all of this stuff invisible to you.

I actually ran a script against the Garden Hose, which is a 1% sample of all tweets; it’s about 60 tweets a second. And if you run that for even a few minutes, you’ll see that the top hashtags coming up are advertising Turkish-based escort services. I did it again recently and it’s still the case. I mean, these are still the top hashtags on the Twitter Garden Hose. And yet you’ve probably never seen those things being advertised on Twitter.

Can’t say I have.

So the quality filter is suppressing all that stuff, all that background noise that’s going on.
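The Garden Hose experiment can be sketched the same way: this hypothetical script attaches to the 1% sample stream and tallies hashtags as they arrive (pre-4.0 tweepy again, placeholder credentials).

```python
import tweepy
from collections import Counter

hashtags = Counter()

class SampleListener(tweepy.StreamListener):
    def on_status(self, status):
        # Each tweet object carries its hashtags in the entities field.
        for tag in status.entities.get("hashtags", []):
            hashtags[tag["text"].lower()] += 1
        if sum(hashtags.values()) % 1000 == 0:
            print(hashtags.most_common(10))   # running top-10 tally

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")

# sample() delivers the ~1% random sample Andy calls the Garden Hose.
tweepy.Stream(auth, SampleListener()).sample()
```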

Wow. What kind of tools are available to someone wanting to run for example a misinformation campaign?

Ah yes, so this is interesting. I mean, I was surprised about this, but a colleague of mine, an external collaborator, Erin Gallagher, found a piece of software called Tweet Attacks Pro, and it is a really good piece of software. It allows you to control multiple Twitter accounts. It doesn’t require API keys, it uses proxies so Twitter can’t tell all these accounts are signing on from the same IP address, and it allows you to automatically do things like follow back and retweet. It’s a very professional tool, and I would imagine it’s the sort of tool the professional bot herders are probably using.

And another external colleague of mine, Geoff Golberg, talked to a guy recently who we’d figured out had bought a lot of followers, and the guy admitted to using Tweet Attacks Pro. And it was just a regular guy, actually; it wasn’t a bot herder or anything like that. But I guess he was trying to make sure his purchased followers didn’t get suspended by making them look like active users, so he was using Tweet Attacks Pro to keep them fresh.

So this was just a regular person who wanted his tweets to be more popular than they were being.

I guess he wanted his follower count to be at a certain level, because there are restrictions on how many people you can follow yourself: the ratio between how many people you follow and how many people follow you needs to be a certain way, otherwise you can’t follow more people. And there are these accounts you can see on the internet where they follow like 80,000 people and are followed back by 80,000 people, so basically they all follow each other, these large follow-back groups.

And these guys are using this phenomenon to game the services attached to social networks that rate you on how much influence you have and how good you are at Twitter, and if your name is high up on that list then people pay attention to you. I think these people are using it to sell themselves for speaking gigs, or to make themselves look more expert than they actually are. And they’re just promoting each other, constantly mentioning each other back and forth, and you see these tweets with huge numbers of @-mentions, like 40 or 50 @-mentions with one word, like “Check this” or something, just going back and forth. These are actually gaming those online systems that give you credibility, and I guess companies are looking at those things to find out who’s credible on social networks and then making decisions based on that. I don’t know, it’s all very mind-blowing, to be honest.
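One simple way to flag the follow-back pattern described here is to measure how much an account’s follower and following sets overlap. This is a toy heuristic, not a tool from the episode, and it assumes you’ve already fetched both ID sets (for example via the followers/ids and friends/ids REST endpoints).

```python
def follow_back_score(follower_ids, following_ids):
    """Jaccard overlap between an account's followers and followees."""
    followers, following = set(follower_ids), set(following_ids)
    if not followers or not following:
        return 0.0
    return len(followers & following) / len(followers | following)

# An account following ~80,000 users that is followed back by nearly
# the same set scores close to 1.0, a red flag worth a closer look.
```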

photo credit: Michael Sandelson

Absolutely. This is like a whole world.

Yeah, it is. But then getting back to what tools are available, that Tweet Attacks Pro is one of a huge amount of tools that exist, and again they look like fairly professionally made tools for doing all sorts of things.

Who makes these things?

Here I’m looking at a website called White Hat Box, but it just seems to be a place where they’ve pulled all the tools together so you can find them and download them; it’s probably separate people making them. You can see stuff for YouTube comments, and of course for Facebook and Instagram; there are similar tools for all of those platforms.

Is there anything to boost your podcast ratings?

We need to find that, don’t we? (laughing) On top of that, there are tools that let you scrape content from the internet if you want to repurpose it, for instance to create a site that looks like another site, with the same stories. You see these sites that look like real news sites or real tech news sites, but they’re actually just reposting content from other places, so there are tools that let you scrape that content and repost it.

There’s also a really interesting tool called Spinner Chief. It takes a piece of written text and substitutes words and phrases for similar words and phrases, so it can create thousands of different sentences that basically say exactly the same thing. So if you want to leave loads of comments in a forum or a comment section or something, you can use this to create stuff that looks different but actually reads the same, you know what I mean? It uses a thesaurus to switch words and phrases for others. It will also do it on the fly, in different languages, and you can have it create however many variants you want and push them out to different sites and forums automatically. So you can basically game forums, or the comment sections on sites or videos, things like that.
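To make the spinning idea concrete, here’s a toy substitution spinner. It is not Spinner Chief, just a few hand-picked synonym sets and random choices, but it shows how thousands of “different” sentences can all say the same thing.

```python
import random

# Hand-picked synonym sets; a real spinner would use a large thesaurus.
SYNONYMS = {
    "great": ["great", "excellent", "fantastic"],
    "food":  ["food", "cuisine", "dishes"],
    "place": ["place", "spot", "restaurant"],
}

def spin(text, n=5):
    """Return n distinct spun variants of text (n must not exceed the
    number of possible word combinations)."""
    words = text.split()
    variants = set()
    while len(variants) < n:
        variants.add(" ".join(random.choice(SYNONYMS.get(w, [w]))
                              for w in words))
    return variants

for variant in spin("great food at this place"):
    print(variant)
```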

We’ve got Spinner Chief for generating realistic text, but I understand there are ways to create realistic fake videos and audio as well. Tell me about those.

Yeah, so you’ve probably by now heard of deepfakes.

Absolutely.

Deepfakes is a technique for replacing someone’s face in a video. It uses machine learning techniques, two variational autoencoders, to take the video, the person’s face in the video, and the face you want to replace it with, and then it iteratively goes through the frames of the video and replaces that person’s face, and what you have in the end is a video with someone else in that position. And deepfakes was a subreddit where a lot of people were doing this, and it got shut down because people were using it to put faces on porn videos, and that was objected to. It was also used, for instance, to put Nicolas Cage’s face on many things, which was very funny. (laughing) It’s pretty hilarious.

To be honest, looking at those videos, they’re not foolproof. But then someone came along and made a nice little script (I think it was a blog post I saw on Hacker Noon) where you point it at a couple of YouTube videos and it takes the faces from those videos. So whereas you previously needed a minimum of around 300 pictures of the face you wanted to swap in, that script could automatically grab those faces for you from YouTube videos, I think from just a couple of 3-minute clips, if they had enough of that person in them, and it would pull like 20,000 faces from those videos, and it would actually do a really good job. His blog post showed an example where he had put one comedian’s face on another, and it actually looked pretty good. And that was maybe last year.

And just recently I saw a new paper, a new piece of research, where they’re now able to change the actual expression on the face. It’s research that was done for the purposes of dubbing films. When you dub a film you’ve got this problem that the mouth of the person speaking isn’t moving the same way as the new language, so it always looks a bit funny. They’re trying to find a way of using this technique to manipulate the actor’s face so that when they’re speaking dubbed text, their mouth moves properly. So they actually take the audio and use that to generate the mouth movements. And of course what they were able to do was also change the way these faked videos were working. So you could put one person’s face onto another and then have them actually make the lip movements alongside something spoken. You could put words into someone’s mouth and put their face onto someone else at the same time. And it’s actually very good, very much better than deepfakes was, very much more realistic.
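For readers curious about the mechanics: the publicly circulated deepfakes implementations pair one shared encoder with a separate decoder per face, so a conceptual Keras sketch might look like the following. The layer sizes and the 64x64 crop are illustrative choices, not the original code.

```python
from tensorflow.keras import layers, models

def make_encoder():
    # One encoder shared by both faces learns a common representation.
    inp = layers.Input(shape=(64, 64, 3))
    x = layers.Conv2D(64, 5, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(128, 5, strides=2, padding="same", activation="relu")(x)
    z = layers.Dense(256, activation="relu")(layers.Flatten()(x))
    return models.Model(inp, z, name="shared_encoder")

def make_decoder(name):
    # One decoder per identity reconstructs that person's face.
    z = layers.Input(shape=(256,))
    x = layers.Dense(16 * 16 * 128, activation="relu")(z)
    x = layers.Reshape((16, 16, 128))(x)
    x = layers.Conv2DTranspose(64, 5, strides=2, padding="same",
                               activation="relu")(x)
    out = layers.Conv2DTranspose(3, 5, strides=2, padding="same",
                                 activation="sigmoid")(x)
    return models.Model(z, out, name=name)

encoder = make_encoder()
decoder_a, decoder_b = make_decoder("decoder_a"), make_decoder("decoder_b")

face = layers.Input(shape=(64, 64, 3))
auto_a = models.Model(face, decoder_a(encoder(face)))  # train on face A crops
auto_b = models.Model(face, decoder_b(encoder(face)))  # train on face B crops
auto_a.compile("adam", "mae")
auto_b.compile("adam", "mae")

# The swap: run frames of face A through encoder + decoder_b, then
# paste the reconstructed face back into each frame of the video.
```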

There’s a similar technique for audio as well.

For audio, there are things like Lyrebird. Lyrebird takes samples of someone’s speech: anyone can go to the site and try out the beta, and it presents you with some sentences which you read out, and then it generates a representation of your own voice, and you can just type stuff in and it’ll say it in your own voice. To be honest it still sounds a little bit computer generated, but this is just the first example of this, and the demos they have of Obama and Trump are quite realistic sounding. You can definitely tell that it’s been generated, but think about it in a year or two.

Yeah, like you said this is new technology right now but in a couple of years it’s going to be pretty amazing.

Yeah, so I mean you could essentially combine these. You could find a video you want to put someone in, find a video of that person, have their face put on the other person’s, have the voice generated, and eventually it might look quite seamless, and you wouldn’t be able to tell it from reality.

Yeah, you mentioned dubbing before, I’m wondering if you could get the sounds of a foreign language, sort of have a person speak a language that they don’t actually speak.

Yeah, you could.

So that’ll put the dubbing voice actors out of work.

It might, too. Yeah, that’s true. You could get a famous actor’s voice and then have it speak a different language.

Sort of have the actor dub themselves.

Yeah.

So technologies like this are used for example for dubbing, so they have legitimate uses, but also not so legitimate uses. Do concerns about these technologies outweigh the benefits?

I can understand people’s concerns, because this is the first time we’ve seen this kind of manipulation done so easily. I mean, obviously it’s been possible to do these things in a professional studio with a lot of time and effort, so it’s not like this is the first time we’ve ever seen someone misrepresented on camera, although most likely it’s been done in the context of making a film or something like that. So I think what’s worrying people is how easy these things are to use now, and the fact that you can now download an app that will do the deepfake for you, so you don’t even have to have any knowledge of variational autoencoders to be able to do this. And Lyrebird is a service on the internet, and it’s neat, I think it’s really good. And obviously, these things are going to happen. They’re going to come about by virtue of this new technology emerging.

But we have sayings like seeing is believing, but we’re getting into a world where a couple of years down the line you’re literally not going to be able to trust your own eyes. Anyone can create a video where anything happens with anyone’s face and anyone’s voice.

Yeah. And so we may get to this point where everyone’s skeptical of everything. I mean, we’re sort of already moving in that direction. The term “fake news” is relatively new and everyone knows what it means now, or at least everyone thinks they know what it means. People are starting to call everything fake news and to believe selectively what they want to believe. And yeah, this is probably going to extend that to more areas than just written text or social media posts and things like that.

So what’s that going to do to society when anyone can create fakes of anyone saying or doing anything?

Well, it’s obviously going to be more confusing, isn’t it. Take for instance the shooting incident that happened in Las Vegas last year, just one example of many. As that was going on, there were people posting things on Twitter which weren’t true. There was a lot going on on social media and people couldn’t tell what was real and what wasn’t. There was a picture of a comedian that people always post, claiming it shows the suspect. Every time there’s a shooting it’s the same picture of the same guy; it’s sort of a meme at this point. But some people took it seriously and started retweeting it, like, oh hey, here’s a picture of the suspect, not knowing that this is the same guy people always put out there as soon as these things happen. And there were also people saying, oh, a friend of mine was there, or whatever, and they weren’t.

And so I think when there’s that kind of confusion going on in that kind of situation, people are trying to grab on to what’s actually happening, and obviously that can turn out to be even dangerous in some cases. But it certainly doesn’t help getting the facts out to people who need them. If the whole world ends up getting to the point where everyone is skeptical about whether they should publish a news story about something because they’re not sure whether it was generated or not, then yeah, that’s going to create a great deal of confusion, isn’t it.

Yeah, I mean news stories you want to get out there as soon as possible, so maybe you don’t always have time to verify everything you read.

Yeah, so I was talking to a colleague this morning about this, Sean, and he told me that news agencies have had a trend toward not having as many foreign correspondents nowadays, and what they kind of go with instead is citizen reporters who take their phone and video what’s going on. And there’s the idea of having authoritative sources: having a video stream that’s somehow signed or fingerprinted as coming from an authoritative source, or having a recorded piece, a video or something, that is signed in some way, like we would sign a binary, or like we have TLS certificates for websites, right, the same idea. But that doesn’t work well with the idea of citizen journalism, because you’ve got a lot of people out there, and you just need to find the person who was there and got the video of whatever you were looking for, you know.

So if someone can intercept or get into that process, they could present something the news agencies think is genuine when it turns out not to be. And that might change the way they work with these things. They may go back to having actual correspondents in locations, with signed video streams or something like that, if that’s how people try to solve this.
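The signing idea maps directly onto standard public-key tooling. Here’s a minimal sketch, assuming Python’s cryptography library and Ed25519 keys; how keys get distributed, and how legitimate re-encoding is handled, are open questions this sketch ignores.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()   # held by the news agency
public_key = private_key.public_key()        # published for verifiers

video = open("clip.mp4", "rb").read()        # placeholder footage
signature = private_key.sign(video)          # shipped alongside the clip

try:
    public_key.verify(signature, video)
    print("Signature valid: the clip matches what the agency published.")
except InvalidSignature:
    print("Signature invalid: the clip was altered or re-encoded.")
```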

News aside, surely no one is basing real life decisions on random tweets.

Right, but that has happened, right? Was it 2013 or something when the Associated Press Twitter account got hacked and tweeted that there was an explosion at the White House? The stock market took a dive, and I think it had a lot to do with trading algorithms that had somehow picked up on the news. It did recover, but it was actually a fairly significant dive at the moment it happened.

And that was just a tweet, you know, just one tweet, 140 characters of text. So you can imagine, if someone managed to get enough people believing in something very scandalous that was generated, and pushed the narrative to the point where, by the time it was found out to be fake, there had already been a shift in what people were thinking about a certain subject. We kind of see these sorts of things happening already to a certain extent. But these could be actually quite damaging; they could be used very offensively in the right hands.

Yeah, there’s an internet saying that what has been seen cannot be unseen, and it’s sometimes hard to disprove things even after we know they’re not true.

Yeah, and there’s also a certain percentage of people who are going to hang onto something regardless of whether it was proven to be true or false. That’s always going to happen, yeah.

And I guess the more shocking something is, the more likely people are to share it.

Yeah, I mean this doesn’t just go for what we’ve been talking about with news or the press. When people see stuff on social networks, some have a tendency to share sensational things, and sensational headlines, without even having clicked on the article. That’s something we’ve heard a lot about people doing, sharing that stuff on Facebook with regard to the US elections and things like that. So yeah, it’s enough to make something shocking and have it go viral without anyone having vetted it at all.

Yeah, I personally know some people who are sharing too many articles to have read all of them thoroughly.

Yeah, they literally just share the headlines. And if it’s a little video clip that fits into a tweet and lasts 30 seconds people will definitely share it, won’t they?

What’s the future of misinformation going to look like?

I mean, you can just extrapolate from what we’ve been talking about already. For instance, generative adversarial networks have been used to generate pictures of people’s faces. But these are not real people; these are faces generated from a collection of different faces. And if you look at the progress that’s been made in that area over the last four or five years, it’s quite astounding. Nowadays they’re generating pictures that are photorealistic. A year ago they were blurry, pixelated, not very good looking things.

So for instance that sort of technique could be used to generate fake social media personas. Right now, if someone wants to create a real-looking Twitter account, they need to put a photo on it, someone’s picture, or sometimes a photo of a dog or something, but they need something that makes it look like it was started by a real person. And in some cases you can reverse image search those images and see that, oh, this account was created using an image that’s also been used on these other accounts, so someone’s using that image to create accounts that look like real people. But when you can just generate a face, it’s going to be unique, you can do it as many times as you want, and the images can’t be reverse searched, so you’re not going to be able to tell if it’s a real person or not.
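A cheap stand-in for that reverse image search is perceptual hashing. This sketch uses the imagehash library (file names are placeholders) to flag avatars that are near-duplicates across accounts, which is exactly the signal a freshly GAN-generated face would erase.

```python
from PIL import Image
import imagehash

def avatar_hash(path):
    # Perceptual hash: visually similar images get similar hashes.
    return imagehash.phash(Image.open(path))

h1 = avatar_hash("account1_avatar.jpg")
h2 = avatar_hash("account2_avatar.jpg")

# Subtracting two hashes gives a Hamming distance; small means "same picture".
if h1 - h2 <= 4:
    print("Likely the same source image reused across accounts.")
```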

Extrapolating from deepfakes and what we talked about, being able to manipulate a person’s face and mouth and have them speak a different language, those techniques are only going to get better, and then maybe they’ll be able to change other things about the video. Maybe they can make you look younger or older, change your hair, or put a beard on you, things like that. The videos themselves will get more realistic, and the amount of time it takes to do this will go down, because processing power will keep improving and the techniques will keep improving.

Lyrebird and things like that will be able to generate much more realistic-sounding voices that aren’t completely monotonous, that can get excited or angry, things like this. And when it comes to something like Spinner Chief: Spinner Chief is very simple. You take a paragraph and it generates lots of different versions of that paragraph just by replacing words or phrases. But you can still relatively easily determine that that’s happened. You can use regular expressions to find those patterns and catch all of the auto-generated paragraphs that match the replace functionality they used.
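Here’s a toy example of that regex approach, reusing the synonym sets from the spinner sketch earlier. In practice you’d first have to recover the substitution lists the spinner used, which this example simply assumes.

```python
import re

# Turn known synonym sets back into one alternation pattern.
pattern = re.compile(
    r"(?:great|excellent|fantastic)\s+"
    r"(?:food|cuisine|dishes)\s+at\s+this\s+"
    r"(?:place|spot|restaurant)",
    re.IGNORECASE,
)

reviews = [
    "Excellent cuisine at this spot",
    "Fantastic dishes at this restaurant",
    "The soup was cold and the service slow",
]

# The first two match the same spinning template; the third does not.
print([r for r in reviews if pattern.search(r)])
```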

But going into the future, you should be able to use recurrent neural networks to generate text that mimics a certain writing style on a certain topic, and then auto-generate content from that. I’ve already done that myself. I took posts from the World of Warcraft forums and trained a recurrent neural network to spit out text that looks like someone posting on the World of Warcraft forums. I built a chatbot that took all the posts and replies, and it basically argues with itself about rogues being nerfed and stuff like that.
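A minimal character-level version of that kind of generator, in the spirit of the classic Keras text-generation example: train an LSTM to predict the next character, then sample from it. The corpus path, sequence length and training settings are placeholders; a convincing model needs a large corpus and far more training.

```python
import numpy as np
from tensorflow.keras import layers, models

text = open("forum_posts.txt").read()        # placeholder corpus
chars = sorted(set(text))
idx = {c: i for i, c in enumerate(chars)}
seq_len = 40

# Build one-hot (sequence -> next character) training pairs.
X = np.zeros((len(text) - seq_len, seq_len, len(chars)), dtype=np.float32)
y = np.zeros((len(text) - seq_len, len(chars)), dtype=np.float32)
for i in range(len(text) - seq_len):
    for t, c in enumerate(text[i:i + seq_len]):
        X[i, t, idx[c]] = 1.0
    y[i, idx[text[i + seq_len]]] = 1.0

model = models.Sequential([
    layers.LSTM(128, input_shape=(seq_len, len(chars))),
    layers.Dense(len(chars), activation="softmax"),
])
model.compile(loss="categorical_crossentropy", optimizer="adam")
model.fit(X, y, batch_size=128, epochs=20)

# Generate: seed with real text, repeatedly sample the next character.
generated = text[:seq_len]
for _ in range(400):
    x = np.zeros((1, seq_len, len(chars)), dtype=np.float32)
    for t, c in enumerate(generated[-seq_len:]):
        x[0, t, idx[c]] = 1.0
    probs = model.predict(x, verbose=0)[0]
    probs = probs / probs.sum()              # guard against rounding drift
    generated += chars[np.random.choice(len(chars), p=probs)]
print(generated)
```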

And I’ve read a blog post from someone who used the same technique to make himself a bit of money writing fake TripAdvisor-style reviews: it just auto-generated restaurant reviews, he submitted them, and he got paid like 10 dollars a review. Because there are “companies,” quote unquote, who are paying people to write fake reviews of restaurants and hotels and things like that.

So these things are already being used, and it’s fairly straightforward. If you get yourself an account on Amazon, you can get hold of pretty decent hardware that can run these models for not that much, really. You don’t have to buy a big expensive box with a bunch of video cards in it just to do this nowadays. So it’s within the grasp of people who are willing to spend some time with Python and Tensorflow or Keras or something like that, and it’s not long before there’ll be a tool where you can just click a button, point it at some text and say, hey, generate me something that looks like this.

So in terms of cyber security, this is going to make it easier to social engineer people, phishbait them, things like that?

Yeah, that’s true. And in fact there was a group that did a study on phishing people on Twitter, using tweets that were generated, I think with a recurrent neural network, such that the language of those tweets was much more realistic, much more human, not so auto-generated looking. And the result of that piece of research was that people were quite a lot more likely to fall for that sort of phishing campaign than for the standard badly written search-and-replace stuff that was going on already.

So yeah, you could eventually imagine that someone could generate a… well, you could have a computer write an article about something current-events related that makes sense, that is plausible, that obviously didn’t happen, and that reads in perfect English, and you could just have it posted. I would say that is definitely within reach, yes.

Well this has been absolutely fascinating. Thank you Andy for sharing your thoughts with us.

Thank you for having me.

