A 460 GOPS/W Improved Mnemonic Descent Method-Based Hardwired Accelerator for Face Alignment


By: 
Huiyu Mo; Leibo Liu; Wenping Zhu; Qiang Li; Shouyi Yin; Shaojun Wei

The mnemonic descent method (MDM) algorithm is the first end-to-end recurrent convolutional system for high-accuracy face alignment. However, its heavy computational complexity and high memory access demands make it difficult to satisfy the requirements of real-time applications. To address this problem, an improved MDM (I-MDM) algorithm is proposed for efficient hardware implementation based on several hardware-oriented optimizations. First, a patch merging mechanism is introduced to dynamically cluster and eliminate redundant landmarks, which significantly reduces computational complexity with minimal accuracy loss. Second, a dedicated convolutional layer is inserted to halve the number of computations and the memory accesses of the subsequent fully connected layer, yielding a 4.42% decrease in the failure rate. Third, a lightweight preprocessing method named dual regressors (DR) is proposed to reinitialize face images, which greatly improves overall accuracy. Moreover, compared with a similar method, the DR method reduces computations and memory storage by nearly 99.9%. Overall, compared with the MDM algorithm, I-MDM not only reduces the number of computations by 23.5% but also decreases the failure rate by 17.9% on the 300-W test set. Based on the proposed I-MDM algorithm, an I-MDM-based hardwired accelerator is presented, implemented in a TSMC 65 nm CMOS process. First, compared with similar solutions, the gradient calculation operation is rearranged and loaded pixels are reused in the histogram of oriented gradients (HOG) feature extraction, eliminating all division operations and 25% of off-chip memory accesses. Second, patch-independent central activations are used to enable patch-level pipelined operations, yielding a 2× acceleration in the overall process. This accelerator achieves 460 GOPS/W energy efficiency at 330 MHz, which is 38× higher than the most recent face alignment accelerator in the same process.
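The abstract states that the gradient calculation in the HOG feature extraction is rearranged to eliminate all division operations, but does not spell out the rearrangement. A common hardware-friendly way to achieve this, sketched below as an assumption rather than the paper's actual circuit, is to replace the per-pixel `atan(gy/gx)` orientation binning with cross-multiplication against precomputed fixed-point tangent thresholds, so that each bin decision uses only multiplies and compares. The bin count, fixed-point scale, and helper names here are all hypothetical.

```python
import math

N_BINS = 9                      # unsigned orientation bins over [0, 180) (assumption)
SCALE = 1 << 12                 # fixed-point scale for the tangent thresholds (assumption)

# Bin boundaries of [0,180) folded into (-90, 90], where tan() is monotonic.
_BOUNDS_DEG = [-80, -60, -40, -20, 0, 20, 40, 60, 80]
_TAN_FXP = [round(math.tan(math.radians(d)) * SCALE) for d in _BOUNDS_DEG]
# The 10 folded intervals (-90,-80), [-80,-60), ..., [80,90] map to these bins;
# the first and last both fall in the bin straddling 90 degrees.
_BIN_LUT = [4, 5, 6, 7, 8, 0, 1, 2, 3, 4]

def hog_bin(gx: int, gy: int) -> int:
    """Orientation bin of gradient (gx, gy) using only multiplies and compares."""
    if gx < 0 or (gx == 0 and gy < 0):
        gx, gy = -gx, -gy       # fold into right half-plane; orientation mod 180 unchanged
    if gx == 0:
        return 4 if gy else 0   # vertical gradient -> 90 degrees -> bin 4
    lhs = gy * SCALE
    # gy/gx >= tan(c)  <=>  gy*SCALE >= gx*tan_fxp(c)  for gx > 0 (no division)
    j = sum(1 for t in _TAN_FXP if lhs >= gx * t)
    return _BIN_LUT[j]
```

In hardware, each comparison maps to one fixed-point multiplier and comparator that can evaluate all thresholds in parallel, which is why this style of rearrangement removes dividers entirely.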
