Abstract: Security-oriented applications of signal processing have received increasing attention in recent years. Digital watermarking, steganography and steganalysis, multimedia forensics, biometric signal processing, and video surveillance are just a few examples of such interest. In many cases, though, researchers (especially signal processing researchers) have failed to recognize the single most distinctive feature of any security-oriented application, i.e., the presence of one or more adversaries aiming to make the system fail. One of the most evident consequences is that security requirements are misunderstood. This has long been the case, for instance, in digital watermarking, where it took several years to recognize that robustness and security are contrasting requirements calling for different countermeasures. Similarly, security issues in biometric research are often neglected in favor of pattern recognition issues more related to robustness than to security. Similar concerns apply to multimedia forensics, network flow analysis, and spam filtering.
Even when the need to cope with the actions of a malevolent adversary is taken into account, the proposed solutions are often ad hoc and fail to provide a unifying view of the challenges that such a scenario poses from a signal processing perspective. The time is ripe to go beyond this limited view and lay the basis for a general theory that accounts for the impact an adversary has on the design of effective signal processing tools, i.e., a theory of adversarial signal processing.
The goal of this talk is to present the challenges and opportunities that the development of such a theory poses, and to summarize some scattered steps already taken in this direction in various fields, including watermarking, multimedia forensics, biometrics, and adversary-aware classification.