F-Secure is one of eleven consortium partners in SHERPA, an EU-funded Horizon 2020 project (as mentioned in a previous blog post). The project aims to develop an understanding of how machine learning will be used in society in the future, what ethical issues may arise, and how those issues might be addressed.
One of the initial aims of the project was to develop an understanding of how machine learning is being used at present, and extrapolate that baseline into a series of potential future scenarios (a look at what things might be like in the year 2025). Some of the scenarios are already online (and make for some interesting reading). Examples include the use of machine learning in education, policing, warfare, human assistance, and transport. Some of the other SHERPA deliverables, such as a series of case studies, have also already been published.
F-Secure is the technical partner in this project and, as such, provides technical advice to the other partners (such as explanations of how machine learning methodologies work, what can and can't be done with them, and how they might be improved or innovated on in the future). As part of this project, we also aim to propose technical solutions to some of the ethical concerns that are identified.
One of F-Secure’s first tasks in this project was to conduct a study of security issues, dangers, and implications of the use of data analytics and artificial intelligence, which included assessing applications in the cyber security domain. The research project primarily examined:
- ways in which machine learning systems are commonly misimplemented (and recommendations on how to prevent this from happening)
- ways in which machine learning models and algorithms can be adversarially attacked (and mitigations against such attacks)
- how artificial intelligence and data analysis methodologies might be used for malicious purposes
The output of this task was a report that was written in collaboration with our partners. The document covers both technical and ethical implications of machine learning technologies and uses as they exist today, with some minor extrapolation into the future. The full document can be found here. It is quite a lengthy read, so we’ve decided to post a short series of articles that contain excerpts. There are four articles in this series (in addition to this introduction). Each article covers a different section in the final report. We’ve opted to keep this series technical, so if you’re interested in reading the ethical findings, you can find them in the original document. The articles cover the following topics:
- Bad AI
- This article details the types of flaws that can arise when developing machine learning models. It includes some advice on how to avoid introducing flaws while developing your own models.
- Malicious use of AI
- This article explores how machine learning techniques, and services built on them, might be used for malicious purposes.
- Adversarial attacks against AI
- This article explains how attacks against machine learning models work, and provides a number of interesting examples of potential attacks against systems that utilize machine learning methodologies.
- Mitigations against adversarial attacks
- This article explores currently proposed methods for hardening machine learning systems against adversarial attacks.
Each article contains a link to the next, so if you wish to read the series in sequence, just follow the link at the end of this article. If a particular topic interests you, follow one of the links above. Of course, you can also download the entire document for offline reading by going here. Enjoy!