SLTC Newsletter, November 2015
Modeled after the successful NIST Speaker Recognition i-Vector Machine Learning Challenge held in 2013-2014, in 2015 NIST launched a Language Recognition i-Vector Machine Learning Challenge focused on open-set language identification. The Language Recognition Challenge used data from previous NIST Language Recognition Evaluations (LREs) and other LDC and IARPA corpora. Rather than distributing audio data as in the LREs, NIST distributed 400-dimensional i-vectors produced by a state-of-the-art system from MIT Lincoln Laboratory and the JHU HLT Center of Excellence. Using the i-vector representation made the evaluation more accessible to participants from outside the audio processing community and allowed for a more direct comparison of the different back-ends, since it removed the burden of audio processing and provided a common system front-end.
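To make the fixed-front-end setup concrete, the sketch below shows one simple back-end that participants could run on the distributed i-vectors: length normalization followed by cosine scoring against per-language mean vectors. This is an illustrative example only, not the official baseline system; the data, function names, and language labels here are invented.

```python
# Illustrative back-end sketch (NOT the official challenge baseline):
# cosine scoring of length-normalized i-vectors against per-language means.
import numpy as np

DIM = 400  # the challenge distributed 400-dimensional i-vectors


def length_normalize(x):
    """Project i-vectors onto the unit sphere (common i-vector preprocessing)."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)


def train_language_means(train_vectors, labels):
    """Average the length-normalized training i-vectors for each language."""
    means = {}
    for lang in set(labels):
        idx = [i for i, lab in enumerate(labels) if lab == lang]
        means[lang] = length_normalize(train_vectors[idx]).mean(axis=0)
    return means


def score(test_vector, means):
    """Cosine similarity of one test i-vector against each language mean."""
    t = length_normalize(test_vector)
    return {lang: float(np.dot(t, length_normalize(m))) for lang, m in means.items()}


# Toy usage with random stand-in data (real i-vectors came from NIST).
rng = np.random.default_rng(0)
train = rng.normal(size=(6, DIM))
labels = ["eng", "eng", "eng", "spa", "spa", "spa"]
means = train_language_means(train, labels)
scores = score(train[0], means)
best = max(scores, key=scores.get)
```

In a real open-set system the scores would additionally be compared against a threshold or an out-of-set model before a target language is declared.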
The Challenge covered 50 target languages and a set of unnamed “out-of-set” languages. Labeled training data (300 i-vectors per language) were provided for the target languages, and a development set of approximately 6,500 unlabeled i-vectors covering the target and out-of-set languages was also provided. The test set consisted of approximately 6,500 test segments covering the target and out-of-set languages. Unlike traditional LREs, where audio segments contain nominally 3, 10, or 30 seconds of speech, the speech durations of the audio segments used to create the i-vectors for the Challenge were sampled from a log-normal distribution with a mean of approximately 35 seconds.
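The duration sampling described above can be sketched as follows. The article only states the mean (about 35 s), so the spread parameter below is an assumption chosen for illustration; the evaluation plan specifies the actual distribution parameters.

```python
# Sketch of sampling speech durations from a log-normal distribution with a
# mean of ~35 s. SIGMA is an assumed value for illustration only.
import numpy as np

TARGET_MEAN_S = 35.0
SIGMA = 1.0  # assumed log-domain standard deviation (not from the article)

# For a log-normal variable, E[X] = exp(mu + sigma^2 / 2); solve for mu so
# that the distribution has the target mean.
MU = np.log(TARGET_MEAN_S) - SIGMA**2 / 2

rng = np.random.default_rng(42)
durations = rng.lognormal(mean=MU, sigma=SIGMA, size=100_000)
empirical_mean = durations.mean()
```

Unlike the fixed 3/10/30-second conditions of earlier LREs, this yields a continuum of durations, so systems cannot assume a single duration condition.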
Participation in the Language Recognition i-Vector Machine Learning Challenge was the highest in LRE history: 148 participants from 31 countries registered to take part, and 59 of them submitted a total of 3,877 system outputs. Compared with the most recent NIST LRE, which had 22 participants and 58 submissions, the Challenge saw a substantial increase in both participants and submissions, suggesting that it succeeded in reaching a broader community.
At the end of the official Challenge period, 93% of Challenge participants had submitted a system that outperformed a pre-defined baseline system, and the leading system demonstrated an approximately 55% relative improvement over that baseline.
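For a cost- or error-style metric where lower is better (as in the challenge's scoring), the relative improvement figure quoted above is the fractional reduction in the metric with respect to the baseline. The numbers in the snippet below are hypothetical, chosen only to show the arithmetic.

```python
# Arithmetic behind a "relative improvement over the baseline" figure for a
# lower-is-better metric. The costs here are made up, not challenge scores.
def relative_improvement(baseline_cost, system_cost):
    """Fractional reduction in cost relative to the baseline."""
    return (baseline_cost - system_cost) / baseline_cost


baseline = 0.40  # hypothetical baseline cost
leader = 0.18    # hypothetical leading-system cost
improvement = relative_improvement(baseline, leader)  # -> 0.55, i.e. 55%
```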
NIST intends to organize a discussion of the 2015 Challenge and its results, possibly at the 2016 Odyssey Speaker and Language Recognition Workshop, to be held in June 2016 in Bilbao, Spain.
For more information about the Challenge itself, see the evaluation plan at http://www.nist.gov/itl/iad/mig/upload/lre_ivectorchallenge_rel_v2.pdf. Please note that while the official period for the Challenge is over and the leaderboard is no longer being updated, the scoring platform is still available for experimentation. To conduct your own LRE i-vector experiments, visit the challenge platform at https://ivectorchallenge.nist.gov. If you have comments, corrections, or additions to this article, please contact: email@example.com.
Craig S. Greenberg, Désiré Bansé, Alvin F. Martin, George R. Doddington, and Audrey Tong are with NIST. John M. Howard is with Systems Plus. Daniel Garcia-Romero and Alan McCree are with The Johns Hopkins University, HLT-COE. Jaime Hernández-Cordero and Lisa P. Mason are with the US DoD. Douglas A. Reynolds and Elliot Singer are with MIT Lincoln Laboratory.