Scalable and Efficient Neural Speech Coding: A Hybrid Design


By: Kai Zhen, Jongmo Sung, Mi Suk Lee, Seungkwon Beack, and Minje Kim

We present a scalable and efficient neural waveform coding system for speech compression. We formulate the speech coding problem as an autoencoding task, where a convolutional neural network (CNN) performs encoding and decoding as a neural waveform codec (NWC) during its feedforward routine. The proposed NWC also defines quantization and entropy coding as a trainable module, so coding artifacts and bitrate control are handled during the optimization process. We achieve efficiency by introducing compact model components to the NWC, such as gated residual networks and depthwise separable convolution. Furthermore, the proposed models employ a scalable architecture, cross-module residual learning (CMRL), to cover a wide range of bitrates. To this end, we use the residual coding concept to concatenate multiple NWC autoencoding modules, where each NWC module performs residual coding to restore the reconstruction loss that its preceding modules have created. CMRL can also scale down to cover lower bitrates, for which it employs a linear predictive coding (LPC) module as its first autoencoder. This hybrid design integrates LPC and NWC by redefining LPC's quantization as a differentiable process, so the entire system can be trained end to end. The decoder of the proposed system uses either one NWC (0.12 million parameters) in the low-to-medium bitrate range (12 to 20 kbps) or two NWCs at the high bitrate (32 kbps). Although the decoding complexity is not yet as low as that of conventional speech codecs, it is significantly lower than that of other neural speech coders, such as a WaveNet-based vocoder. In terms of wideband speech coding quality, our system yields comparable or superior performance to AMR-WB and Opus on TIMIT test utterances at low and medium bitrates, and it can scale up to higher bitrates to achieve near-transparent performance.
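The cross-module residual cascade can be sketched in a few lines. In this toy sketch, each trained NWC autoencoder is stood in for by a simple lossy uniform quantizer (an assumption for illustration only, not the paper's model); the bookkeeping follows the CMRL idea: module i codes the residual left by modules 1 to i-1, and the decoder sums all module reconstructions.

```python
import numpy as np

def toy_codec(x, step):
    """Stand-in for one NWC autoencoder: a lossy uniform quantizer.
    (The actual module is a trained CNN codec; this quantizer is a
    hypothetical placeholder used only to show the residual cascade.)"""
    return np.round(x / step) * step

def cmrl_encode_decode(x, steps):
    """Cross-module residual coding: each module codes what its
    predecessors left behind; the decoder sums all reconstructions."""
    recons = []
    residual = x
    for step in steps:
        y = toy_codec(residual, step)
        recons.append(y)
        residual = residual - y  # hand the remaining error to the next module
    return sum(recons)

x = np.random.default_rng(0).normal(size=512)
coarse = cmrl_encode_decode(x, steps=[0.5])        # one module (lower bitrate)
fine = cmrl_encode_decode(x, steps=[0.5, 0.1])     # two modules (higher bitrate)
err_coarse = np.abs(x - coarse).max()
err_fine = np.abs(x - fine).max()
print(err_coarse, err_fine)
```

Appending a module never changes the bitstream of the earlier modules, which is what makes the architecture bitrate-scalable: the second module only refines the first module's residual, so the reconstruction error shrinks as modules are added.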
