IEEE OJSP Special Issue on Adversarial Machine Learning: Bridging the Gap between In Vitro and In Vivo Research
Manuscript Due: 27 November 2023
Publication Date: June 2024
Artificial Intelligence (AI) techniques, including those based on Deep Neural Networks (DNNs), are revolutionizing the way we process and analyse data. On the negative side, increasing concerns have been raised about the security of AI whenever the presence of an informed adversary aiming to make the system fail cannot be ruled out. Over the last decade, a wide range of attacks against machine learning models has been proposed, including the fabrication of targeted and untargeted adversarial examples, training-set poisoning, backdoor attacks, and model inversion. In reaction, several defenses have been proposed as well, either to prevent adversarial attacks or at least to detect their presence, so as to ease the deployment of effective countermeasures.
In most cases, attacks and defenses have been designed to work in laboratory (in vitro) conditions. For instance, adversarial attacks are often carried out in the digital domain, in floating-point arithmetic, paying little or no attention to the robustness constraints they must satisfy in real deployments (in vivo conditions). Similarly, many backdoor attacks rely on unrealistic assumptions about the capabilities of the attacker. As for defenses, their effectiveness is often tested in laboratory conditions, on carefully controlled datasets, or under the assumption of perfect knowledge of the attack, thus failing to demonstrate their real value in realistic conditions. A direct consequence of the unrealistic assumptions underlying most of the state of the art is that the threat posed by adversarial attacks against AI risks being overestimated, since implementing such attacks in operational conditions is often significantly more difficult than expected. At the same time, defenses designed to work in laboratory conditions may not be effective at all in realistic conditions, giving a false (and dangerous) sense of security; conversely, no defense may be necessary at all in real conditions, given the limited effectiveness of the attacks.
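To make the in vitro / in vivo gap concrete, the sketch below (not part of the call; a minimal PyTorch illustration assuming a hypothetical image classifier `model` and labelled batch `(x, y)` with pixels in [0, 1]) crafts a standard floating-point FGSM perturbation and then applies the 8-bit pixel quantization that any saved or re-captured image undergoes in practice. The attack success rate measured after quantization is often lower than the one measured on the raw floating-point input, which is exactly the kind of discrepancy this special issue targets.

```python
# Minimal sketch of the in vitro vs. in vivo gap: an FGSM perturbation crafted
# in floating point may lose part of its effectiveness once the image is
# rounded to 8-bit pixels, as happens in any real deployment.
# `model`, `x`, `y` are placeholders, not artifacts referenced by the call.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps):
    """Craft an untargeted FGSM adversarial example in floating point."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

def quantize_to_uint8(x):
    """Simulate the 8-bit rounding an image undergoes outside the lab."""
    return torch.round(x * 255.0) / 255.0

# Hypothetical usage with any classifier `model` and batch (x, y):
# x_adv  = fgsm_attack(model, x, y, eps=2 / 255)
# x_real = quantize_to_uint8(x_adv)  # "in vivo" version of the same attack
# print((model(x_adv).argmax(1)  != y).float().mean())   # in vitro success rate
# print((model(x_real).argmax(1) != y).float().mean())   # often noticeably lower
```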
The goal of this special issue is to bridge the gap between current research on adversarial machine learning, mostly carried out in vitro, and research done under more realistic, in vivo, conditions. We want to attract works investigating the limitations and potential misconceptions of research in laboratory conditions, the practical aspects of adversarial machine learning, and the effectiveness of attacks and defenses in realistic conditions under realistic threat models. A non-exhaustive list of topics addressed by this Special Issue includes:
- Adversarial examples in the physical domain
- Black-box and limited-knowledge attacks
- Cyber-physical adversarial attacks
- Backdoor attacks against practical AI systems
- Realistic threat models for adversarial AI
- Construction of realistic datasets for the benchmarking of adversarial AI
- Defenses against adversarial AI with imperfect attack knowledge
- Reduced-complexity attacks and defenses
- Explainable defenses
Manuscript submission: via the Manuscript Central system.
Important Dates
- Submissions Deadline: 27 November 2023
- First Review Decision: 29 January 2024
- Revisions Due: 26 February 2024
- Final Decision: 4 March 2024
- Final Manuscript Due: 18 March 2024
- Estimated Publication Date: June 2024
Guest Editors
- Benedetta Tondi (Lead Guest Editor)
- Mauro Barni
- Battista Biggio
- Lorenzo Cavallaro
- Konrad Rieck
- Fabio Roli