Special Sessions

Information Security meets Adversarial Examples

organised by:

  • Matthias Kirchner (Kitware, USA)
  • Cecilia Pasquini (University of Innsbruck, Austria)
  • Ilia Shumailov (University of Cambridge, UK)

Machine learning is employed intensively in support of a variety of research problems at the core of information security and forensics (IFS), where data-driven approaches have yielded significant performance gains in recent years. However, modern machine learning (ML) techniques, including those based on deep networks, have been found to be vulnerable to malicious attacks at both the training and inference stages, and a growing body of research has demonstrated that so-called adversarial examples, inputs perturbed to reliably thwart ML decisions, can be generated with little effort. This compromises the dependability of state-of-the-art ML not only in classical IFS scenarios, but also in any situation that calls for learning systems that are robust against strategic adversaries.

Conversely, the IFS community itself has decades of profound experience in modeling and defending against adversarial attacks. Intelligent and strategic adversaries are at the core of discussions of steganalysis, counter-forensics, the robustness of digital watermarking, and attempts to spoof biometric systems, amongst many others. This suggests transferring concepts and methodologies long discussed in the IFS community, especially those dealing with imperceptible modifications of visual data, to problems arising in the field of adversarial machine learning, and vice versa.

The main objective of this Special Session is thus to highlight the connection between the IFS and ML communities, and to link their respective strengths and experiences so as to facilitate the development and application of learning systems that are dependable in adversarial scenarios. The session offers a unique and timely venue for novel research contributions on topics including, but not limited to:

  • Implications of adversarial attacks against ML systems on media and IFS applications
  • Application of concepts and methods developed in the IFS domain to adversarial learning scenarios and vice versa
  • Development of novel techniques to counter adversarial attacks against ML systems
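
To illustrate the low effort involved in attacking an undefended model, the following minimal sketch generates an adversarial example with the fast gradient sign method (FGSM; Goodfellow et al., 2015) in PyTorch. The model, inputs, and perturbation budget epsilon are placeholders chosen for illustration, not part of any particular system discussed above:

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=8 / 255):
        """Return an adversarial version of x under an L-infinity budget epsilon."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)    # loss of the model on the clean input
        loss.backward()                        # gradient of the loss w.r.t. the input
        x_adv = x + epsilon * x.grad.sign()    # one step in the direction of the gradient sign
        return x_adv.clamp(0.0, 1.0).detach()  # keep pixel values in the valid range

A single, nearly imperceptible gradient step of this kind is often enough to flip the decision of an undefended image classifier, which is precisely the fragility this Special Session seeks to address.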