Special Sessions

Information Security meets Adversarial Examples

organised by:

  • Matthias Kirchner (Kitware, USA)
  • Cecilia Pasquini (University of Innsbruck, Austria)
  • Ilia Shumailov (University of Cambridge, UK)

Machine learning is intensively employed in support of a variety of research problems at the core of information security and forensics (IFS), where data-driven approaches have yielded significant performance gains in recent years. However, modern machine learning (ML) techniques, including those based on deep networks, have been found to be vulnerable to malicious attacks at both the training and inference stages, and a growing body of research has demonstrated that so-called adversarial examples, which reliably thwart ML decisions, can be generated with little effort. This compromises the dependability of state-of-the-art ML not only in classical IFS scenarios, but in any situation that calls for learning systems robust against strategic adversaries.

Conversely, the IFS community itself has decades of profound experience in modeling and defending against adversarial attacks. Intelligent and strategic adversaries are at the core of discussions concerned with steganalysis, counter-forensics, the robustness of digital watermarking, and attempts to spoof biometric systems, amongst many others. This suggests adopting concepts and methodologies long discussed in the IFS community, especially those dealing with imperceptible modifications of visual data, for problems arising in the field of adversarial machine learning, and vice versa.

The main objective of this Special Session is thus to highlight the connection between the IFS and ML communities, and to link their respective strengths and experiences to facilitate the development and application of learning systems that are dependable in adversarial scenarios. The session offers a unique and timely venue for novel research contributions discussing topics including, but not limited to:

  • Implications of adversarial attacks against ML systems on media and IFS applications
  • Application of concepts and methods developed in the IFS domain to adversarial learning scenarios and vice versa     
  • Development of novel techniques to counter adversarial attacks against ML systems

Blockchain-Based Trust and Provenance: Opportunities and Challenges

organised by:

  • Karthik Nandakumar (IBM Singapore Lab)
  • Sharath Pankanti (IBM Thomas J. Watson Research Center)
  • Nalini Ratha (IBM Thomas J. Watson Research Center)

As the world moves towards a sharing economy, decentralization and trust will be central to success. Blockchain is a foundational technology that allows humans to reach consensus on a shared digital history without a middleman. It enables a network of participants to pool data from diverse sources and gain new insights from them, while being assured of the integrity and provenance of the underlying data. In a decentralized world, blockchain-based infrastructure is essential to create the necessary trust between diverse stakeholders.

For example, many complex practical challenges, such as (i) stakeholder identity management, (ii) compliance with regulations (e.g., provenance of data, compliance of information systems with local laws), (iii) integrity of data, and (iv) protection of the privacy of sensitive information (e.g., video redaction, sharing of insurance claims or patient data), could be effectively addressed using blockchain technology. Of these problems, research issues related to data integrity and the protection of sensitive information are of particular interest to the WIFS community. This special session would be an excellent venue for novel research contributions that address all aspects of data integrity and privacy, information security, and signal processing in a distributed setting leveraging blockchain infrastructure.