Title: PANNs: Large-Scale Pretrained Audio Neural Networks for Audio Pattern Recognition
Date: 17 August 2022
Time: 10:00 AM Eastern (New York time)
Duration: Approximately 1 Hour
Presenters: Dr. Qiuqiang Kong
Based on the IEEE Xplore® article: PANNs: Large-Scale Pretrained Audio Neural Networks for Audio Pattern Recognition
Published: IEEE/ACM Transactions on Audio, Speech, and Language Processing, October 2020, available in IEEE Xplore®
Audio pattern recognition is an important research topic in machine learning, and includes tasks such as audio tagging, acoustic scene classification, music classification, speech emotion classification, and sound event detection. Recently, neural networks have been applied to tackle audio pattern recognition problems; however, previous systems were built on specific datasets with limited durations. In computer vision and natural language processing, systems pretrained on large-scale datasets have generalized well across tasks, but there has been limited research on pretraining systems on large-scale datasets for audio pattern recognition. In this work, we propose pretrained audio neural networks (PANNs) trained on the large-scale AudioSet dataset. These PANNs are transferred to other audio-related tasks. We investigate the performance and computational complexity of PANNs modeled by a variety of convolutional neural networks. We propose an architecture called Wavegram-Logmel-CNN that uses both the log-mel spectrogram and the waveform as input features. Our best PANN system achieves a state-of-the-art mean average precision (mAP) of 0.439 on AudioSet tagging, outperforming the previous best system's 0.392. We transfer PANNs to six audio pattern recognition tasks and demonstrate state-of-the-art performance on several of them. We have released the source code and pretrained models of PANNs.
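As a concrete illustration of the log-mel spectrogram input feature mentioned above, the sketch below computes one from a raw waveform using only NumPy. The parameter values (32 kHz sample rate, 1024-sample window, 320-sample hop, 64 mel bins, 50–14000 Hz range) are assumptions chosen to match typical AudioSet front-end settings and are not taken from this announcement; the released PANNs code should be treated as the reference implementation.

```python
import numpy as np

def hz_to_mel(f):
    # HTK-style mel scale (one common convention among several).
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr, n_fft, n_mels, fmin, fmax):
    # Triangular filters spaced evenly on the mel scale.
    mel_pts = np.linspace(hz_to_mel(fmin), hz_to_mel(fmax), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        left, center, right = bins[i], bins[i + 1], bins[i + 2]
        for k in range(left, center):
            if center > left:
                fb[i, k] = (k - left) / (center - left)
        for k in range(center, right):
            if right > center:
                fb[i, k] = (right - k) / (right - center)
    return fb

def logmel(wav, sr=32000, n_fft=1024, hop=320, n_mels=64,
           fmin=50, fmax=14000):
    # Frame the signal, apply a Hann window, take the power spectrum.
    window = np.hanning(n_fft)
    n_frames = 1 + (len(wav) - n_fft) // hop
    frames = np.stack([wav[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, n=n_fft)) ** 2
    # Project onto the mel filterbank and compress with a log.
    mel = power @ mel_filterbank(sr, n_fft, n_mels, fmin, fmax).T
    return np.log(mel + 1e-10)  # shape: (n_frames, n_mels)

# One second of a 440 Hz tone at 32 kHz.
t = np.arange(32000) / 32000.0
feat = logmel(np.sin(2 * np.pi * 440 * t))
print(feat.shape)  # (97, 64)
```

In the Wavegram-Logmel-CNN described in the paper, a feature like this is fed to a CNN branch alongside a learned "Wavegram" branch operating directly on the waveform.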
Qiuqiang Kong received his Ph.D. degree from the University of Surrey, Guildford, U.K., in 2019.
Following his Ph.D., he joined ByteDance as a research scientist. His research topics include the classification, detection, and separation of general sounds and music. He is known for developing pretrained audio neural networks (PANNs) for audio tagging, for winning the Detection and Classification of Acoustic Scenes and Events (DCASE) challenge in 2017, and for transcribing GiantMIDI-Piano, the world's largest piano MIDI dataset.
Dr. Kong has co-authored over 50 papers in journals and conferences, including IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP), ICASSP, INTERSPEECH, IJCAI, DCASE, EUSIPCO, and LVA-ICA. As of July 2022, his work had been cited 2,024 times, with an h-index of 24. He was nominated as postgraduate research student of the year at the University of Surrey in 2019 and is a frequent reviewer for well-known journals and conferences, including TASLP, TMM, SPL, TKDD, JASM, EURASIP, Neurocomputing, Neural Networks, ISMIR, and CSMT. He assisted with organizing the LVA-ICA 2018 and DCASE 2018 workshops and serves as a co-editor for the journal Frontiers in Signal Processing.