The Hitchhiker’s Guide to Bias and Fairness in Facial Affective Signal Processing: Overview and techniques

By: Jiaee Cheong; Sinan Kalkan; Hatice Gunes

Given the increasing prevalence of facial analysis technology, bias in these tools is becoming an even greater source of concern. Several studies have highlighted the pervasiveness of such discrimination, and many have proposed solutions to mitigate it. Despite these efforts, understanding, investigating, and mitigating bias in facial affect analysis remains an understudied problem. In this work, we aim to provide a guide by 1) giving an overview of the various definitions of bias and measures of fairness within the field of facial affective signal processing and 2) categorizing the algorithms and techniques that can be used to investigate and mitigate bias in facial affective signal processing. We present the opportunities and limitations within the current body of work, discuss the gathered findings, and propose areas that call for further research.

Introduction

Facial analysis, including identity recognition, facial attribute classification (e.g., age, gender, and race), and affect prediction from facial images, has been widely studied in the literature, with increasing applications in domains ranging from medicine and marketing to surveillance [1]. With advances in machine learning, facial analysis is now dominantly performed using learning-based approaches. However, these data-driven, data-hungry, black-box learning-based approaches can make facial analysis biased and unfair toward certain demographic groups. Left unaddressed, bias can lead to concerning unfairness issues, such as misidentifying people of a certain demographic group (e.g., a female African-American being misclassified as male more often than a male Caucasian is misclassified as female) [1], or predicting a higher probability of criminality for certain groups of people [2]. A biased depression analyzer (https://dl.acm.org/doi/abs/10.1145/3107990.3108004) or chronic pain detector (https://ieeexplore.ieee.org/abstract/document/7173007) can have serious consequences if such technology is incorporated into the health-care domain. Given the far-reaching, real-life consequences that such threats pose, this issue has become a major concern. Among the public, this pressing issue has elicited widespread activism that eventually prompted governmental institutions such as the European Commission to set up a regulatory framework to address it [3].
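
To make the disparity described above concrete, the following minimal Python sketch (not taken from the article) quantifies one common notion of bias: a gap in misclassification rates across demographic groups. The data, group labels, and function names are hypothetical illustrations under these assumptions, not the authors' method.

import numpy as np

def per_group_error_rates(y_true, y_pred, groups):
    """Return the misclassification rate for each demographic group."""
    rates = {}
    for g in np.unique(groups):
        mask = groups == g
        rates[str(g)] = float(np.mean(y_true[mask] != y_pred[mask]))
    return rates

def error_rate_gap(y_true, y_pred, groups):
    """Largest difference in error rate between any two groups (0 = parity)."""
    rates = per_group_error_rates(y_true, y_pred, groups)
    return max(rates.values()) - min(rates.values()), rates

if __name__ == "__main__":
    # Hypothetical labels and predictions from a binary affect classifier,
    # with a self-reported gender attribute per sample (toy data only).
    y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
    y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0, 0, 1])
    groups = np.array(["female", "female", "female", "female", "female",
                       "male", "male", "male", "male", "male"])
    gap, rates = error_rate_gap(y_true, y_pred, groups)
    print(rates)  # {'female': 0.2, 'male': 0.4} for this toy data
    print(gap)    # 0.2: the classifier errs twice as often on the male group

A gap of zero would indicate parity of error rates across groups; the larger the gap, the stronger the evidence of the kind of demographic disparity discussed above.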
