Face Anti-Spoofing With Deep Neural Network Distillation

By: Haoliang Li; Shiqi Wang; Peisong He; Anderson Rocha

One challenge in face anti-spoofing (or presentation attack detection, PAD) is the difficulty of collecting sufficient and representative attack samples for an application-specific environment. In view of this, we tackle the problem of training a robust PAD model with limited data in an application-specific domain. We propose to leverage data from a richer, related domain to learn meaningful features through neural network distillation. We first train a deep neural network on reasonably sufficient labeled data from the richer domain to “teach” a network for the application-specific domain, in which training samples are scarce. Subsequently, we form training sample pairs from both domains and formulate a novel optimization function that combines the cross-entropy loss, the maximum mean discrepancy of features, and a paired-sample similarity embedding for network distillation. In this way, we expect to capture spoofing-specific information and train a discriminative deep neural network for the application-specific domain. Extensive experiments validate the effectiveness of the proposed scheme in face anti-spoofing setups.
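
The following is a minimal sketch of how such a combined distillation objective could look in PyTorch. The three terms (cross-entropy on the application-specific domain, maximum mean discrepancy between teacher and student features, and a paired-sample similarity embedding) follow the description above, but the RBF kernel, the cosine-similarity form of the embedding term, and the loss weights lambda_mmd and lambda_sim are illustrative assumptions, not the authors' exact formulation.

```python
# Sketch of a distillation loss combining cross-entropy, feature MMD, and a
# paired-sample similarity embedding term. Kernel choice and loss weights are
# illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F


def rbf_mmd(x, y, sigma=1.0):
    """Squared MMD estimate between two feature batches with an RBF kernel."""
    def kernel(a, b):
        dist = torch.cdist(a, b) ** 2
        return torch.exp(-dist / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()


def similarity_embedding_loss(feat_student, feat_teacher):
    """Match the pairwise cosine-similarity structure of student and teacher
    features computed on the same batch of paired samples."""
    s = F.normalize(feat_student, dim=1)
    t = F.normalize(feat_teacher, dim=1)
    return F.mse_loss(s @ s.T, t @ t.T)


def distillation_loss(logits_student, labels, feat_student, feat_teacher,
                      lambda_mmd=0.5, lambda_sim=0.5):
    """Total objective: cross-entropy on the target (application-specific)
    domain plus MMD and similarity-embedding terms that transfer knowledge
    from the teacher trained on the richer domain."""
    ce = F.cross_entropy(logits_student, labels)
    mmd = rbf_mmd(feat_student, feat_teacher)
    sim = similarity_embedding_loss(feat_student, feat_teacher)
    return ce + lambda_mmd * mmd + lambda_sim * sim


if __name__ == "__main__":
    # Toy usage with random tensors standing in for paired-batch features.
    batch, dim, classes = 16, 128, 2                 # live vs. spoof
    feat_teacher = torch.randn(batch, dim)           # frozen teacher features
    feat_student = torch.randn(batch, dim, requires_grad=True)
    logits_student = torch.randn(batch, classes, requires_grad=True)
    labels = torch.randint(0, classes, (batch,))
    loss = distillation_loss(logits_student, labels, feat_student, feat_teacher)
    loss.backward()
    print(f"total loss: {loss.item():.4f}")
```

In practice the teacher network would be trained first on the richer, related domain and then held fixed while the student is optimized on paired batches with this combined loss.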
