A new study from SHERPA – an EU-funded project researching the ethical and human rights implications of AI and big data – emphasizes that AI capabilities are spreading. And it’s not just organizations using them. AI research is freely available online, which means it can easily be put to both benign and malicious purposes.
But while there are clear signs that threat actors are developing these capabilities and could use them in cyber attacks, F-Secure’s Andy Patel says that they’re likely used for data analysis and not weaponized for direct use against targets. At least for the time being.
“If machine learning techniques are being used by criminal organizations or nation states for malicious purposes, they’re probably used mainly for data analysis,” says Andy, a researcher with F-Secure’s Artificial Intelligence Center of Excellence. “A few interesting proofs of concept have surfaced over the past year that illustrate how machine learning techniques might be used to perform penetration testing, or provide added anti-reverse-engineering functionality to a malicious executable. One might imagine that proofs of concept such as these may get improved, innovated on, and eventually make their way into attackers’ toolkits.”
The study identifies several different paths for AI-based attacks to develop. Many of them are based on existing research and technologies. Some are already being used, although the purposes remain largely unknown.
It might surprise some people to hear that the most pressing threat posed by attackers working with AI is the production of fake content. ‘Fake news’ has been in the headlines for years (more on that below). But there are far more applications for fake content. And AI is unequivocally capable of producing fake content that can fool both man and machine.
“At the moment, our ability to create convincing fake content is far more sophisticated and advanced than our ability to detect it. And AI is helping us get better at fabricating audio, video, and images, which will only make disinformation and fake content more sophisticated and harder to detect,” says Andy. “And there are many different applications for convincing fake content, so I expect it may end up becoming problematic.”
There’s no shortage of examples of realistic, AI-fabricated content. Several are discussed in the study. Lyrebird.ai, DeepFakes, pix2pix, CycleGAN, and OpenAI’s GPT-2 are all notable AI techniques and services for generating fake content. One interesting case referenced in the study was a fake Twitter profile (pictured below) that researchers believe was created using AI.
This type of fake content has applications that go beyond disinformation (although that’s an almost certain use case). According to the study, cyber security researchers have already developed a proof-of-concept for AI capable of autonomously creating phishing messages. It’s not suitable for use in real attacks; rather, it’s a first step toward completely automated, end-to-end, AI-powered phishing. The study speculates similar capabilities could be developed for spam campaigns.
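The study doesn’t publish that proof-of-concept’s code, but the underlying idea – generating plausible-sounding text automatically – can be illustrated with something as simple as a word-level Markov chain. The sketch below is a minimal, benign, hypothetical example; the corpus and function names are illustrative and not from the study:

```python
import random
from collections import defaultdict

def build_chain(corpus):
    """Map each word to the list of words that follow it in the corpus."""
    chain = defaultdict(list)
    words = corpus.split()
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length=8, seed=0):
    """Walk the chain from a start word, picking a successor at random each step."""
    rng = random.Random(seed)
    output = [start]
    for _ in range(length - 1):
        successors = chain.get(output[-1])
        if not successors:
            break
        output.append(rng.choice(successors))
    return " ".join(output)

corpus = (
    "please review the attached report and confirm receipt "
    "please confirm your account details and review the policy"
)
chain = build_chain(corpus)
print(generate(chain, "please"))
```

A real attack tool would need far more sophisticated language models (the study’s examples point toward neural generators like GPT-2), but even this toy shows why generated text can read as superficially plausible.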
One of AI’s greatest strengths is automating tasks. Data processing is a great example of this. Combing through large amounts of data is tedious. Fortunately, AI isn’t evolved enough to experience boredom, making it perfect for such tedious, detail-oriented work.
According to the study, intelligent automation will augment attackers’ current capabilities. It’ll essentially elevate their game to new heights by giving them the “big data” advantage. Potential future applications of intelligent automation identified in the study include:
- Botnets that automatically identify new targets for spam campaigns or DDoS attacks
- “Intelligent” malware that creates customized payloads after infecting and performing reconnaissance on the target
- Using AI in backends to deliver payloads meant to evade the target’s detection mechanisms (some say this is already occurring)
- End-to-end fake news and disinformation campaigns where a malicious AI model creates an entire strategy to manipulate an existing AI system
Disinformation and fake news
Fake news is already happening. The study cites numerous examples of disinformation campaigns across the globe. It’s a regular topic of research at F-Secure (you can read numerous stories about Twitter research on F-Secure’s News from the Labs blog).
However, the study points out that as AI advances, these disinformation campaigns will become far more sophisticated and damaging:
“…if more complex algorithms (for instance, based on reinforcement learning) were to be used to drive these systems, they may end up creating optimization loops for human behaviour, in which the recommender observes the current state of each target and keeps tuning the information that is fed to them, until the algorithm starts observing the opinions and behaviours it wants to see. In essence the system will attempt to optimize its users.”
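The “optimization loop” the quoted passage describes can be made concrete with a toy simulation. The sketch below is purely illustrative – the variant names and engagement probabilities are invented, and a real system would use far richer models than this epsilon-greedy loop – but it captures the mechanic: a “recommender” repeatedly shows one of three content variants, observes whether a simulated user reacts, and keeps tuning toward whatever produces the behavior it wants to see:

```python
import random

# Hypothetical engagement probabilities for three content variants;
# the recommender does not know these and must learn them from reactions.
TRUE_ENGAGEMENT = {"variant_a": 0.2, "variant_b": 0.5, "variant_c": 0.8}

def optimize_feed(rounds=2000, epsilon=0.1, seed=42):
    """Epsilon-greedy loop: show a variant, observe the reaction,
    and keep tuning toward whatever the simulated user responds to."""
    rng = random.Random(seed)
    shown = {v: 0 for v in TRUE_ENGAGEMENT}
    engaged = {v: 0 for v in TRUE_ENGAGEMENT}
    for _ in range(rounds):
        if rng.random() < epsilon:
            # Explore: occasionally try a random variant.
            variant = rng.choice(list(TRUE_ENGAGEMENT))
        else:
            # Exploit: show the variant with the best observed engagement rate.
            variant = max(
                shown,
                key=lambda v: engaged[v] / shown[v] if shown[v] else 0.0,
            )
        shown[variant] += 1
        if rng.random() < TRUE_ENGAGEMENT[variant]:  # the user "reacts"
            engaged[variant] += 1
    # Report the variant the loop converged on showing most often.
    return max(shown, key=shown.get)

print(optimize_feed())
```

After enough rounds the loop concentrates almost all exposure on whichever variant draws the strongest reaction – which is exactly the “optimizing its users” dynamic the study warns about.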
The study identifies specific ways AI could socially engineer users’ tastes, beliefs, and behaviors, including:
- Limit what a user sees based on an assessment of the user’s identity, preferences, etc. (already occurring to a certain extent)
- Discourage users from posting/sharing content by only exposing said content to users that will express disapproval of it
- Similarly, share content only with users who will express approval of it, encouraging those users to share similar content more often
- Trap users in a bubble by preventing users from gaining exposure to divergent or contradictory views
- Track changes in views based on interactions, and start promoting or even producing content toward those ends
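Several of these mechanisms boil down to scoring how well a post matches a user and filtering exposure on that score. A minimal, hypothetical sketch (the scoring model, names, and threshold are invented for illustration):

```python
def predicted_approval(user, post):
    """Toy model: approval is highest when the user's leaning matches the post's."""
    return 1.0 - abs(user["leaning"] - post["leaning"])

def select_audience(users, post, threshold=0.7):
    """Expose the post only to users predicted to approve of it,
    reinforcing the poster and keeping dissenting users out of the loop."""
    return [u["name"] for u in users if predicted_approval(u, post) >= threshold]

users = [
    {"name": "alice", "leaning": 0.9},
    {"name": "bob", "leaning": 0.1},
    {"name": "carol", "leaning": 0.8},
]
post = {"leaning": 0.85}
print(select_audience(users, post))  # ['alice', 'carol']
```

The same filter, inverted (exposing content only to users who will disapprove), produces the discouragement effect in the list above.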
Where are AI threats heading?
Nobody can predict the future. No AI or human intelligence can do that. But a common thread among these potential attack paths is the idea that AI will take over most of the work currently done by human attackers. We may eventually see machine learning models developed and then monetized by cyber criminals as “cyber crime as a service” businesses.
But only time will tell.