In my role as Account Manager at F-Secure, I have the advantage of working daily with intelligent colleagues who know a great deal about topics such as AI, machine learning, and cybersecurity at large. I thought it would be interesting to interview some of my colleagues on these subjects.
Below is an English transcript of the conversation I recently had with Matti Aksela, head of the Artificial Intelligence Center of Excellence at F-Secure, where he leads a team that carries out AI research and implementations and collaborates with internal and external stakeholders.
At F-Secure, Matti continues to advance his belief that AI should be used to aid the work of experts—with man and machine working together to produce results greater than either alone. Of course, this requires development of more autonomous and intelligent AI agents while refining a collaborative model that employs and expands the capabilities of both experts and AI.
AI as a buzzword: what, in your opinion, is the biggest difference between AI, algorithms, and machine learning? Where do the differences lie from a cybersecurity perspective?
“There are several definitions of AI, but the way I look at it is that AI is the “umbrella term” covering different methods for implementing something that mimics human cognitive abilities. In this view, machine learning is a group of methods that implement AI functionality by learning from data, rather than explicitly encoding what the outputs should be for a given input. By this logic, business rules are also AI, but they are not machine learning, since someone explicitly encodes the desired output. There are many families of machine learning methods, of which the most talked about at the moment are neural networks and, in particular, deep learning (deep neural networks). These are models inspired by the human brain: multiple layers of artificial neurons are connected to build far more complex capabilities than any single neuron could have. Here is an illustration of the terms as I see them:
As for algorithms, to me an algorithm is basically any mathematical or programmatic procedure that can be used to reach a goal. Machine learning models are thus in practice implemented as algorithms, but then again, data-sorting algorithms are also algorithms, for a quite different purpose. The use of “algorithms” as a term, sometimes even interchangeably with AI, is honestly quite confusing in my opinion. To me the main thing is what we do in practice, which is why I personally prefer to talk about machine learning. There are many different algorithms and methods used in the field of machine learning, and there is no single clearly best solution – one could say there is no silver bullet – so it definitely makes sense to keep an open mind about what type of model to use for a given task. So while I, for example, worked at the Neural Networks Research Centre of Helsinki University of Technology in the 90s and have nothing against neural networks or deep learning – they can be very useful for many tasks – I don’t believe they are the only approach worth looking at, as there are many excellent methods on the probabilistic side of machine learning, and in many other areas, too.
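The distinction Matti draws between explicitly encoded business rules and machine learning can be sketched in a few lines of code. This is a deliberately toy example (not F-Secure code): the feature, a file’s entropy score, and the hand-picked and learned thresholds are all hypothetical, chosen only to contrast a rule someone writes down with a decision boundary derived from labelled data.

```python
# Explicit rule: a human encodes the desired output directly.
def rule_based_flag(entropy: float) -> bool:
    """Flag a file as suspicious if its entropy exceeds a hand-picked limit."""
    return entropy > 7.0

# Machine learning (in miniature): the threshold is derived from labelled
# examples instead of being written down by a person. This toy "learner"
# assumes the classes are separable and picks the midpoint between the
# highest benign and lowest malicious entropy seen in training.
def learn_threshold(samples: list[tuple[float, bool]]) -> float:
    benign = [e for e, malicious in samples if not malicious]
    malicious = [e for e, is_malicious in samples if is_malicious]
    return (max(benign) + min(malicious)) / 2

# Hypothetical training data: (entropy, is_malicious) pairs.
training_data = [(4.1, False), (5.0, False), (7.6, True), (7.9, True)]
threshold = learn_threshold(training_data)  # midpoint of 5.0 and 7.6 -> 6.3

def learned_flag(entropy: float) -> bool:
    return entropy > threshold

print(rule_based_flag(7.2))  # True: above the hand-picked 7.0
print(learned_flag(7.2))     # True: above the learned 6.3
```

Both functions are algorithms in Matti’s sense; only the second qualifies as machine learning, because its behavior comes from the data rather than from an explicitly encoded output.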
Related to cyber security, as in most industries, I have to say that AI has become the hype term marketing wants to glue onto everything to make it sound exciting. Some say AI is in the PowerPoint slides and ML is in the actual code – and that is actually a pretty good way of putting it. Machine learning is a great set of techniques and methods for getting value out of data, and we are definitely already using them for multiple use cases, ranging from malware detection to network traffic analysis and intrusion detection, just to name a few areas. But we need to be pragmatic and actually implement the methods as part of a larger security solution – that is how we see true benefit, together with security experts and excellent core technology.”
Which threats do you see in the future if attackers also gain access to virtually unlimited cloud resources, machine learning, algorithms and AI?
“This is actually quite concerning in my opinion, and one might even say it is the flip side of the “democratization” of AI/ML (which is very positive in principle). There are more and more toolboxes and solutions available that are very easy to use and allow people to build solutions that learn very effectively from data to solve a variety of use cases, and this is great – as long as those use cases have good intentions. I think that technology itself is neither good nor bad – it is all about how we use it. For example, a surgeon can use a knife to save a person’s life, but a murderer might kill with the very same knife. Does that make the knife good or bad? In my opinion neither; it is all about what it is used for – and this is why we also need to understand how technology like AI can be used for malicious purposes.
There are already several open-source toolkits that utilize machine learning for e.g. penetration testing. Sure, these have been built with good intentions in mind, but what is stopping them from being applied to malicious use cases? We haven’t seen much of this yet, but I do believe it is only a matter of time before this becomes much more prevalent, since machine learning enables far more efficient automation and scale than could be reached otherwise. Some attack scenarios are already benefiting from AI technologies – for example, impersonation of legitimate users’ voices or even video, optimized phishing messages, and so on. Overall, this can empower much faster attacks – completing an intrusion in seconds rather than days, leaving practically no time for a human operator to take action – or, almost inversely, much more subtle attacks that actively mimic normal behavior (as an AI-powered computer program will in practice have near-infinite patience). I believe AI-enabled threats will continue to emerge, and in order to be prepared to counter these threats, the defensive side must also utilize AI/ML technologies.”
Which developments will we see in the next 2-3 years?
“There has been a lot of discussion about the “singularity” and the emergence of super-human general AI and its potential implications, but in my opinion we are still quite far from that. Sure, we have seen great advances and fantastic new use cases, but this is all still “narrow AI”, where the AI application has a very specific focus area and can do a very good job in that particular application. For example, we have seen excellent results in image recognition, enabling applications that help medical professionals diagnose diseases much more effectively. This is also a great example of what is sometimes called augmented intelligence – using AI as something that helps people do their jobs better. And there is nothing wrong with that at all – it is a fantastic thing, letting us make our lives easier and work more effectively. I see this remaining the main direction going forward – we will build even better narrow AI solutions as we have more data, more computational power, and improved algorithms to drive the applications. We will also continue to look for ways to move towards general intelligence, but my honest opinion is that it is more likely I will be sipping coffee on the porch, having reached retirement age, than that my job will have been taken over by a general AI.
What will the near future bring, then? I tend to use the example of flight when thinking about the current direction of AI development. When mankind envisioned flying, we wanted to mimic how flight happens in nature, and one can find patents for flying devices that essentially amount to attaching wings to a human being – of course, that is not how we fly nowadays. We developed something that utilizes the strengths of the technology we were able to build – and I believe the same will apply to AI. Computers are much faster at computation and have much more memory and communication bandwidth than humans – and when thinking about how to advance AI, I believe it makes a lot of sense to take those strengths as the starting point rather than trying to mimic human intelligence directly. This would mean more connected solutions that exchange information and collaborate – while already having capabilities of their own as well. I sometimes refer to this direction as collaborative swarms of intelligent agents, and we here at F-Secure are actually working towards solutions like this in the cyber security space within our Project Blackfin. This is an exciting direction that we see as taking our industry truly forward in the utilization of artificial intelligence, and the first generation of Project Blackfin technology is already available in our Rapid Detection and Response product!”