A security self-assessment questionnaire for machine learning-based systems

Andrew Patel

08.12.21 10 min. read

Adversarial attacks against machine learning systems

As the use of machine learning models in everyday applications grows, it is important to consider the resilience of machine learning-based systems to attacks. In addition to traditional security vulnerabilities, machine learning systems expose new types of weaknesses, such as those arising from the dependence of machine learning models on the data they use during training and inference. Several such weaknesses have already been discovered and exploited by adversarial machine learning attacks, including:

  • Model poisoning: An attack whereby an adversary maliciously injects training data or modifies the training data or training logic of a machine learning model. This attack compromises the integrity of the model, reducing the correctness and/or confidence of its predictions for all inputs (denial-of-service attacks) or for selected inputs (backdoor attacks).
  • Model evasion: An attack whereby an adversary maliciously selects or constructs inputs to be sent to a machine learning model at inference time. This attack succeeds if the attacker’s inputs receive incorrect or low-confidence predictions from the targeted model.
  • Model stealing: An attack whereby an adversary builds a copy of a victim’s machine learning model by querying it and using those queries and resulting predictions to train a surrogate model. This attack results in compromising the confidentiality and intellectual property of the victim’s machine learning model.
  • Training data inference: An attack whereby an attacker infers characteristics or reconstructs parts of the data used to train a machine learning model (model inversion and attribute inference) or verifies whether specific data were used during training (membership inference). This attack relies either on querying the target model and analysing its predictions or on reverse-engineering the model. This attack compromises the confidentiality of the data used to train the model.
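To make the model evasion attack above concrete, the following is a minimal sketch of a gradient-based evasion (in the style of the fast gradient sign method) against a toy logistic-regression classifier. The model weights, the input, and the perturbation budget epsilon are all illustrative values chosen for this example, not taken from any real system.

```python
import numpy as np

# Toy logistic-regression "victim" model with fixed, illustrative weights.
w = np.array([2.0, -1.5])
b = 0.1

def predict_proba(x):
    """Probability of class 1 for input vector x."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# A benign input the model classifies as class 1 with high confidence.
x = np.array([1.0, -1.0])
p_clean = predict_proba(x)

# Evasion step: perturb x against the gradient of the class-1 score.
# For logistic regression the gradient of the logit with respect to x is
# simply w, so stepping in the direction of -sign(w) lowers the class-1
# probability. epsilon bounds the per-feature perturbation.
epsilon = 1.5
x_adv = x - epsilon * np.sign(w)
p_adv = predict_proba(x_adv)

print(f"clean confidence: {p_clean:.3f}, adversarial: {p_adv:.3f}")
```

With these values the clean input is classified as class 1 with high confidence, while the perturbed input flips to class 0, illustrating how an attacker who can probe gradients (or approximate them through queries) can craft inputs that receive incorrect predictions.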

Given the range of adversarial attacks already available against machine learning models, it is important to understand and manage the security risks and potential impacts that would result from such attacks.

Challenges in machine learning security assessment

Assessing security risks associated with machine learning systems is not straightforward. Successful security assessments require multidisciplinary technical skills, knowledge of emerging technologies, and an understanding of the ecosystem in which the machine learning-based system operates. The impact of a compromise must first be understood, based on how the machine learning-based system is used and on the decisions it makes. Next, it is important to (i) identify security threats, (ii) discover system vulnerabilities and (iii) understand how these vulnerabilities can be exploited by adversarial machine learning attacks. Finally, one must understand how defence mechanisms can mitigate the threats and vulnerabilities affecting a system. (Of course, traditional security vulnerabilities and attacks must be analysed as well.)

Performing these tasks is challenging due to three main factors:

  • Awareness about vulnerabilities and attacks specific to machine learning-based systems is currently low.
  • Little is understood about the types of attack vectors that lead to the exploitation of vulnerabilities in machine learning models, especially those that are integrated into larger systems.
  • Availability of experts with a deep understanding of both security and machine learning is limited.

A solution to self-assess the security of machine learning-based systems

To partially address these challenges, we have created three questionnaires designed to assist machine learning practitioners, security experts, and decision makers in this risk assessment process. These questionnaires are designed to help respondents develop an understanding of the security risks associated with different types of machine learning-based systems and reason about vulnerabilities and possible attacks against their own machine learning-based systems. The posed questions also hint at measures that can be adopted to patch vulnerabilities and prevent attacks. We have provided three questionnaires, each targeted towards experts in specific domains.

  1. Risk and impact assessment questionnaire – designed to assess how well the respondent is managing security risks associated with their machine learning-based systems. It analyses the respondent’s approach to threat analysis and impact assessment both in a generic context and when considering security threats specific to machine learning systems.
  2. Attack surface and vulnerabilities questionnaire – designed to help identify the attack surface of the respondent’s machine learning-based system and to discover its potential vulnerabilities at various stages of its lifecycle.
  3. Security of your machine learning system questionnaire – designed to assess how secure and robust the respondent’s machine learning-based systems are. It explores whether the respondent has implemented processes or techniques to discover vulnerabilities in their systems and to mitigate attacks.

Each questionnaire is designed to be answered individually. Answering all three questionnaires provides a complete picture of the security status of the assessed system. Respondents can choose to remain anonymous and provide no personal information while answering. The goals of these questionnaires are as follows:

  1. To raise awareness about security threats against machine learning-based systems.
  2. To help machine learning practitioners assess the security of their own machine learning-based systems.
  3. To share solutions and practices that can increase the security of machine learning-based systems.
  4. To infer global trends about the current state of machine learning-based system security. Respondents who leave contact details will get early access to a report that will eventually be made public.

All three questionnaires can be accessed at the following link: These questionnaires are completely anonymous and F-Secure does not collect any personal data about the respondents. The responses will be combined and summarised to infer global trends. No individual answer provided by a single respondent will be divulged.

The following sections describe each of the three questionnaires in more detail.

1. Risk and impact assessment questionnaire

To grasp the importance of machine learning security, it is first crucial to understand, at a high level, the security risks associated with a machine learning-based system. This questionnaire primarily aims to raise awareness about the consequences of a security incident on the overall ecosystem in which the machine learning-based system is used. Second, it aims to help respondents prepare for such incidents. The questions explore the application domain of machine learning-based systems and how their predictions or recommendations are used. Questions are also designed to investigate processes for monitoring and managing security risks. A set of questions targeted at risk assessment attempts to identify processes in place to manage risk, such as:

  • Threat analysis exercises.
  • Identification and monitoring of vulnerabilities.
  • Use of metrics to quantify and monitor security risks.
  • Classification and ranking of different risk factors.
  • Response procedures to mitigate security risks.

Additional questions focus on understanding impact in the case of a successful attack, based on factors including:

  • The application domain in which the machine learning-based system is used.
  • The damage that could be caused by a successful attack to the business, users, society, etc.
  • The type of adversarial machine learning attack that the system might be vulnerable to.
  • Assets that might be compromised during an attack (e.g., the machine learning model or the data it is trained with).

The targeted respondents for this questionnaire are people who understand the business usage of their machine learning models in a wider context and who understand generic security concepts and risk management. We recommend this questionnaire be answered by a member of upper management or by a legal or risk management expert. The questionnaire is available at the following link:

2. Attack surface and vulnerabilities questionnaire

The attack surface exposed by a machine learning-based system is what enables attack vectors against it. This questionnaire reviews design, implementation, and deployment choices for machine learning-based systems in order to uncover weaknesses to adversarial machine learning attacks. The ease with which adversarial machine learning attacks can be performed against a system depends on choices made by the system’s designer. This questionnaire analyses attack vectors against machine learning-based systems according to the five stages of the machine learning model lifecycle.

  1. Design & Implementation: Choice and implementation of the model, training algorithm, optimization method, and definition of input features.
  2. Data collection: Gathering, labelling, and sanitizing of data, and the process of transforming it into a selected representation (e.g., feature extraction) that will serve as input to a machine learning model. This also includes splitting the whole dataset into training set, validation set, and testing set.
  3. Training: Training the machine learning model on prepared data using the training method selected during design and implementation. This includes tuning hyperparameters and iterative training and validation steps required to improve model performance.
  4. Deployment & Integration: Choice of deployment platform to perform inference, and integration of the machine learning model into the overall machine learning-based system.
  5. Inference: The process whereby the model provides predictions or recommendations for inputs submitted to it.
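As a concrete illustration of the data collection stage described above, splitting a dataset into training, validation, and test sets might look like the following sketch. The dataset, its size, and the 80/10/10 split ratios are illustrative choices for this example, not a prescription.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative dataset: 1000 samples with 4 features each, binary labels.
X = rng.normal(size=(1000, 4))
y = rng.integers(0, 2, size=1000)

# Shuffle before splitting so each subset is representative of the whole.
order = rng.permutation(len(X))
X, y = X[order], y[order]

# 80/10/10 split into training, validation, and test sets.
n_train = int(0.8 * len(X))
n_val = int(0.1 * len(X))

X_train, y_train = X[:n_train], y[:n_train]
X_val, y_val = X[n_train:n_train + n_val], y[n_train:n_train + n_val]
X_test, y_test = X[n_train + n_val:], y[n_train + n_val:]

print(len(X_train), len(X_val), len(X_test))  # → 800 100 100
```

From a security perspective, this stage is exactly where poisoning attacks bite: if an adversary can influence the rows of X or the labels in y before the split, the corruption propagates into training, validation, and testing alike, which is why provenance and sanitization of collected data matter.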


Even though choices at different stages of a machine learning model lifecycle can reduce the system’s vulnerability to attacks, not all vulnerabilities can be fixed with simple processes or mechanisms. Decreasing the exposure of a machine learning-based system to one attack may leave it vulnerable to another, or impair its performance. Answering this questionnaire will help the respondent understand different vulnerabilities present in machine learning-based systems and allow them to compare trade-offs between performance and security. For instance, a developer may choose to prioritise defences that address known, likely attacks while accepting vulnerabilities associated with less prominent security threats.

The targeted respondents for this questionnaire are those familiar with the machine learning model lifecycle, their own machine learning system’s architecture, and with machine learning concepts in general. It can be answered by data scientists, data engineers, or software engineers. The questionnaire is available at the following link:

3. Security of your machine learning-based system questionnaire

The security of a machine learning-based system can be safeguarded even if it has exposed vulnerabilities. It is possible to design and deploy defensive measures capable of alleviating most security threats given knowledge of the vulnerabilities inherent in a system and how they might be exploited. This last questionnaire evaluates the security of machine learning-based systems and the respondent’s readiness to respond to potential attacks. It primarily aims to assess the security of machine learning-based systems with respect to adversarial machine learning attacks. The questionnaire does not evaluate traditional cyber security measures, which are assumed to be already met. The first part of this questionnaire focuses on a security assessment, where it evaluates how well respondents know the security of their own machine learning-based systems. The second part evaluates the respondent’s defences against adversarial machine learning attacks.

The targeted respondents for this questionnaire are people with a solid knowledge of security practices and some knowledge about machine learning-based system deployment. It can be answered by information security experts, software engineers, or data engineers. The questionnaire is available at the following link:


This survey is being conducted as part of the EU Horizon 2020 project SPATIAL, and F-Secure’s Project Blackfin. SPATIAL is an EU-funded project which investigates how to enhance AI-powered solutions in terms of accountability, privacy and resilience. This project has received funding from the European Union’s Horizon 2020 research and innovation programme, under grant agreement No 101021808. F-Secure’s Project Blackfin is a multi-year research effort with the goal of applying collective intelligence techniques to the cyber security domain.


