Nemesis ’18

1st Workshop on
Recent Advances in
Adversarial Machine Learning

co-located with ECML/PKDD 2018

Friday, September 14, 2018

Dublin, Ireland



Adversarial attacks on Machine Learning systems have become an indisputable threat. Attackers can compromise the training of Machine Learning models by injecting malicious data into the training set (so-called poisoning attacks), or they can craft adversarial samples that exploit the blind spots of Machine Learning models at test time (so-called evasion attacks). Adversarial attacks have been demonstrated in a number of application domains, including malware detection, spam filtering, visual recognition, speech-to-text conversion and natural language understanding. Devising comprehensive defences against poisoning and evasion attacks by adaptive adversaries remains an open challenge. Gaining a better understanding of the threat posed by adversarial attacks, and developing more effective defence systems and methods, is therefore paramount for the adoption of Machine Learning systems in security-critical real-world applications.
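To make the evasion threat concrete, the following is a minimal sketch of the Fast Gradient Sign Method (FGSM) evasion attack against a toy logistic-regression model. The weights and inputs are invented purely for illustration and are not taken from any system discussed at the workshop.

```python
import numpy as np

# Toy logistic-regression "victim" model: p(y=1|x) = sigmoid(w.x + b).
# The weights are fixed by hand purely for illustration.
w = np.array([2.0, -3.0, 1.0])
b = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)

def fgsm(x, y, eps):
    """Fast Gradient Sign Method.

    For logistic regression with binary cross-entropy loss, the gradient
    of the loss w.r.t. the input is (p - y) * w, so the attack perturbs
    every feature by eps in the direction that increases the loss.
    """
    p = predict(x)
    grad = (p - y) * w            # d(loss)/dx for binary cross-entropy
    return x + eps * np.sign(grad)

x = np.array([1.0, 0.2, -0.5])    # clean input, classified as class 1
x_adv = fgsm(x, y=1.0, eps=0.6)   # adversarially perturbed copy

print(predict(x))      # high probability for class 1
print(predict(x_adv))  # probability pushed below 0.5: the label flips
```

Even though the perturbation is bounded per feature (here eps = 0.6), it suffices to flip the model's decision, which is exactly the blind spot that evasion attacks exploit.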

The Nemesis ’18 tutorial and workshop aims to bring together researchers and practitioners to discuss recent advances in the rapidly evolving field of Adversarial Machine Learning. Particular emphasis will be on:

  • Reviewing both theoretical and practical aspects of Adversarial Machine Learning;
  • Sharing experience from Adversarial Machine Learning in various business applications, including (but not limited to): malware detection, spam filtering, visual recognition, speech-to-text conversion and natural language understanding;
  • Discussing adversarial attacks both from a Machine Learning and Security/Privacy perspective;
  • Gaining hands-on experience with the latest tools for researchers and developers working on Adversarial Machine Learning;
  • Identifying strategic areas for future research in Adversarial Machine Learning, with a clear focus on how that will advance the security of real-world Machine Learning applications against various adversarial threats.


Program committee

  • Naveed Akhtar, University of Western Australia
  • Pin-Yu Chen, IBM Research
  • David Evans, University of Virginia
  • Alhussein Fawzi, DeepMind
  • Kathrin Grosse, University of Saarland
  • Tianyu Gu, Uber ATG
  • Aleksander Madry, MIT
  • Jan Hendrik Metzen, Bosch Center for AI
  • Luis Muñoz-González, Imperial College London
  • Florian Tramèr, Stanford University
  • Valentina Zantedeschi, Jean Monnet University
  • Xiangyu Zhang, Purdue University

Call for Papers

There is a rapidly growing body of literature on Adversarial Machine Learning; however, several key questions remain unanswered:

  • What is the reason for the existence of adversarial examples and their transferability between different Machine Learning models?
  • How can the space of adversarial examples be characterized, in particular, relative to the data manifold and learned representations of the data?
  • Are there provable limitations of the robustness guarantees that adversarial defences can provide, in particular in the case of white-box attacks or adaptive adversaries?
  • How strong is the adversarial threat for data modalities other than images, e.g., text or speech?
  • How to design defences that address threats from combinations of poisoning and evasion attacks?

The submission page is now open.



Important Dates
  • Paper submission deadline:
    Monday, July 2, 2018
  • Notification of acceptance:
    Monday, July 23, 2018
  • Camera-ready version due:
    Monday, August 13, 2018
  • Workshop date:
    Friday, September 14, 2018

Topics of Interest

The workshop solicits contributions on topics including (but not limited to) the following:

Theory of adversarial machine learning

  • Space of adversarial examples
  • Transferability
  • Learning theory
  • Data privacy
  • Metrics of adversarial robustness

Adversarial attacks

  • Data poisoning
  • Evasion
  • Model theft
  • Attacks on other data modalities, in particular text / natural language understanding
  • Attacks by adaptive adversaries

Adversarial defences

  • Data poisoning
  • Evasion
  • Model theft
  • Model hardening
  • Input data preprocessing
  • Robust model architectures
  • Defences against adaptive adversaries

Applications and demonstrations

  • Real-world examples and use cases of adversarial threats and of defences against them

Submission Format

The workshop invites two types of submissions: full research papers and extended abstracts. Accepted full research contributions will be published by Springer in the workshop’s proceedings. Extended abstracts are meant to cover preliminary research ideas and results. Submissions will be evaluated on the basis of significance, originality, technical quality and clarity. Only work that has not been previously published in proceedings and is not under review will be considered.

Papers must be written in English and formatted according to the Springer LNCS guidelines. Author instructions, style files and the copyright form can be downloaded here. Full research papers must be up to ten pages long (excluding references); extended abstracts must be up to six pages long (excluding references). To be considered, papers must be submitted before the deadline (see the Important Dates section). Electronic submissions will be handled via EasyChair. Submissions should include the authors' names and affiliations, as the review process is single-blind. For each accepted paper, at least one author must attend the workshop and present the paper.



The objective of the tutorial is to provide a comprehensive introduction to Adversarial Machine Learning. The first part gives a general overview of the field and formalizes the threat vectors, each illustrated with specific attacks and defences. The second part provides hands-on experience with the recently released Adversarial Robustness Toolbox, an open-source Python library containing state-of-the-art adversarial attacks, defences and metrics for assessing the robustness of Machine Learning models. The tutorial targets both newcomers to Adversarial Machine Learning and experienced researchers and developers who are familiar with similar tools and with workflows for developing, testing and deploying defences against adversarial attacks.
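To give a flavour of the kind of robustness evaluation the hands-on session covers, the sketch below measures the accuracy of a hypothetical linear classifier under L-infinity-bounded adversarial perturbations of growing budget. It uses plain NumPy rather than the Adversarial Robustness Toolbox's actual API, and the model, data and attack are all invented for illustration.

```python
import numpy as np

# A hypothetical pre-trained linear classifier (weights invented for this sketch).
w = np.array([1.5, -2.0])

def predict(X):
    return (X @ w > 0).astype(int)           # class 1 iff the score is positive

def fgsm_batch(X, y, eps):
    # Worst-case L-infinity perturbation for a linear score: push every
    # feature by eps against the true class (the sign of the loss gradient).
    return X - eps * np.sign(w) * (2 * y - 1)[:, None]

def robust_accuracy(X, y, eps):
    # Fraction of inputs still classified correctly after the attack.
    return float(np.mean(predict(fgsm_batch(X, y, eps)) == y))

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 2))
y = predict(X)                               # label the points with the model itself

for eps in (0.0, 0.25, 0.5, 1.0):
    print(f"eps={eps:4.2f}  robust accuracy={robust_accuracy(X, y, eps):.2f}")
```

Plotting robust accuracy as a function of the perturbation budget eps is a standard way to compare defences: a flatter curve indicates a more robust model.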

Program (tentative)

Time         Program                                                   Presenter
09:00–09:05  Welcome to the tutorial                                   Organizers
09:05–10:30  Overview of Adversarial Machine Learning                  Ian Molloy
10:30–10:45  Coffee break
10:45–12:00  Overview of Adversarial AI                                Mathieu Sinn
12:00–13:00  Hands-on session with the Adversarial Robustness Toolbox  Irina Nicolae
13:00–14:00  Lunch
14:00–14:05  Welcome to the workshop                                   Organizers
14:05–15:00  Keynote                                                   Battista Biggio, Pattern Recognition and Applications (PRA) Lab
15:00–15:45  Presentations of accepted papers                          TBA
15:45–16:00  Coffee break
16:00–17:00  Presentations of accepted papers                          TBA
17:00–18:00  Poster session                                            TBA

Venue and Registration

This workshop is co-located with the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases.

For information about the venue, please visit the ECML/PKDD 2018 website.

All participants need to register. Information about registration and fees can be found here.