IEEE SPS Speech and Language Technical Committee E-Newsletter

Welcome!

Welcome to the March 2006 issue of the IEEE Signal Processing Society Speech and Language Technical Committee (SLTC) e-Newsletter. The last few months have been very exciting for the IEEE Speech and Language community, as many positive changes have occurred in the area of spoken language technology within the IEEE Signal Processing Society. We hope you enjoy reading about these changes in the articles below. This is also the first issue distributed via our new IEEE listserv; you will find information on subscribing to and unsubscribing from the listserv in a brief article below.

Contributions of news, events, publications, workshops, and career information to the newsletter are always welcome. Please send all contributions, articles, ideas, and feedback to the SLTC e-Newsletter Editorial Board [speechnewseds <at> ieee <dot> org]. Archives of recent SLTC e-Newsletters can be found on the SLTC website.

This is the first issue of the e-Newsletter published by the new editorial board. We'd like to thank Rick Rose, the previous newsletter Editor-in-Chief, for helping make our transition a smooth one. Finally, we'd also like to thank the authors who contributed articles to this issue of the newsletter: Mazin Gilbert, Mari Ostendorf, Rick Rose, and Jim Glass.

Happy Reading,
The SLTC e-Newsletter Editorial Board
Mike Seltzer, Brian Mak, and Gokhan Tur
[speechnewseds <at> ieee <dot> org]

HEADLINES

A Message from the SLTC Chair: E-Newsletter Transition by Mazin Gilbert (formerly Rahim)
Speech and Language Gets a Boost in SPS Thanks to Efforts of Ad-Hoc Committee by Michael L. Seltzer
A Letter from the Editor: Introducing the IEEE Transactions on Audio, Speech, and Language Processing by Mari Ostendorf
IEEE and ACL to Hold First Joint Workshop on Spoken Language Technology by Gokhan Tur
Summary of the IEEE 2005 Workshop on Automatic Speech Recognition and Understanding by Richard Rose and James Glass
EDICS for IEEE Transactions on Audio, Speech, and Language Processing
New Listserv for SLTC E-Newsletter Distribution by Michael L. Seltzer
 

Conference and Workshop Announcements

Call for Papers:
Workshop on Joint Inference for Natural Language Processing
AAAI Workshop on Statistical and Empirical Approaches for Spoken Dialogue Systems
IEEE International Workshop on Multimedia Signal Processing
INTERSPEECH 2006: International Conference on Spoken Language Processing
SAPA 2006: ISCA Tutorial and Research Workshop on Statistical And Perceptual Audition
IEEE/ACL 2006 Workshop on Spoken Language Technology

Call for Participation:
TC-STAR OpenLab on Speech Translation
3rd Joint Workshop on Multimodal Interaction and Related Machine Learning Algorithms
ICASSP 2006: International Conference on Acoustics, Speech, and Signal Processing


CAREER CENTER

Positions Available:

CASL seeks Assistant/Associate/Senior Research Scientists in Human Language Technology/Computational Linguistics
EdSST seeks PhD students in Speech Science and Technology

Transitions:

Researchers take new positions

 

back to top


A MESSAGE FROM THE SLTC CHAIR: E-NEWSLETTER TRANSITION
by Mazin Gilbert (formerly Rahim)

Over three years ago, we initiated this electronic newsletter with the aim of better connecting with our community members and sharing with them news and recent events in the speech and language areas. We selected Rick Rose to be our first Editor-in-Chief given his proven leadership, strong record of accomplishment, and commitment to doing outstanding work. Rick built the newsletter from the ground up, investing significant effort in compiling news, writing articles, and creating and updating the distribution list. Thanks to Rick, we have a thriving newsletter today that reaches over 2000 members every 2-3 months.

On behalf of the Speech and Language Technical Committee, we would like to take this opportunity to thank Rick for a job well done. In addition, we would like to welcome our new Editorial board members:

Editor-in-chief:
Michael Seltzer, Microsoft 

Editors:
Gokhan Tur, SRI
Brian Mak, The Hong Kong University of Science and Technology

Over the next few months, you will notice a new look and feel for this newsletter, thanks to our new board members. We encourage you to submit events, news and suggestions to our team at [speechnewseds <at> ieee <dot> org].

back to top


Speech and Language Gets a Boost in SPS Thanks to Efforts of Ad-Hoc Committee
By Michael L. Seltzer

There have been several exciting changes made to the IEEE Signal Processing Society (SPS) that increase the presence of spoken language technologies within the society. These changes are the result of a nearly two-year effort initiated by Alex Acero, Mazin Gilbert, Michael Picheny, and Isabel Trancoso. Following ICASSP 2004, this group formed an informal committee to study the state of speech within the IEEE SPS. Specifically, they were concerned that while speech coding, recognition, and synthesis have all historically been well represented in the SPS, much of the work in newer research areas of spoken language processing was being published and presented in journals and conferences outside the IEEE.

After some initial research into this and other related issues, a document of their findings was presented to the IEEE SPS Executive Committee. The Executive Committee voted to form the “Ad-Hoc Committee of Advancing and Strengthening Speech” to study the issues raised by the document. Starting in March 2005, this committee, composed of Acero, Gilbert, Picheny, Trancoso, Joseph P. Campbell, and Ananth Sankar, and chaired by Jose M. F. Moura, worked with the IEEE SPS Board of Governors to bring to fruition a number of changes that significantly increase the activity in spoken language technology within the IEEE SPS.

The highlights of these changes include: 

In addition, the Ad-Hoc Committee proposed two other initiatives that impact not just the speech and language communities but the entire SPS. First, they advocated the creation of the position of Vice President of Technical Directions, responsible for communicating the interests of the various Technical Committees to the Executive Committee and to the Board of Governors. Second, they proposed a review of the processes and procedures required to organize technical meetings; the goal of this study is to find ways to streamline the process of organizing a conference or workshop and lighten the burden on volunteer organizers. Both proposals were accepted by the SPS Board of Governors.

Having accomplished its goal of increasing the SPS's focus on spoken language technologies, the Ad-Hoc Committee disbanded at the end of 2005. The committee wishes to convey its gratitude to the SPS Executive Committee, and in particular to past SPS presidents Fred Mintzer and Rich Cox, current president Al Hero, and president-elect Jose Moura, for their vigorous support of these efforts throughout this process.

back to top


Introducing the IEEE Transactions on Audio, Speech, and Language Processing
By Mari Ostendorf, Editor-In-Chief

I am pleased to have this forum to introduce you to the IEEE Transactions on Audio, Speech, and Language Processing (T-ASL), the new incarnation of the Transactions on Speech and Audio Processing. I feel privileged to have the opportunity to guide the journal at this juncture in its history, a time of growth and change. I also feel very lucky to be taking over for Isabel Trancoso, who so capably guided the journal over the past three years and brought it to the point where it is today. Isabel has left me with a terrific board of associate editors, growing submission trends, and a greatly increased page budget to reduce the publication backlog. Of course, this also leaves me with quite a challenge -- stepping into her shoes will not be easy. Every week of my first two months has brought some new reminder of how much she did... and continues to do in advising me.

Before looking to the future, it is important that I credit the vision of the Signal Processing Society Publications Board and of the Speech and the Audio & Electroacoustics Technical Committees, and the support of the Signal Processing Society staff. In her editorial in the January issue of T-ASL, Isabel credits many people for their help in shaping the journal. I do not want to repeat all of the thanks, though it is certainly due, but I will say that in my first two months as editor I have very quickly come to appreciate their contributions. I would also like to thank Mazin Gilbert and Mike Goodwin, the current TC chairs, for helping me with the (ongoing) transition. Ray Liu, the current VP of Publications, has also been terrific, and I am sure he will provide the same support Isabel enjoyed from previous VPs.

As for the new directions: the obvious change is the increased emphasis on language processing. This is recognition that language -- both written and spoken -- is a "signal" and an application driver of statistical signal processing theory and algorithms. However, the changes are broader, impacting speech and audio processing as well. The best way to appreciate this is to look at the new EDICS for the journal, which are due to the combined effort of the Editorial Board and both of our Technical Committees. The new directions build on the success of the recent special issues spearheaded by Isabel (some of which are coming out as "special sections" because of our increased page budget). In addition to promoting language processing, as in the Speech-to-Speech Translation section in the March '06 issue, several issues bring together speech and audio processing, as in the September '05 issue on Data Mining of Speech, Audio, and Dialog and the January '06 section on Statistical and Perceptual Audio Processing. You can look forward to additional issues in the pipeline. Speaking of which, I would like to take this opportunity to invite new proposals for special issues -- now is the time to plan for the 2008 issues. These issues serve readers by bringing together a group of leading-edge papers on a related topic, and they also serve to shape the future of the journal.

I'd like to close with a call for papers. For a long time, our community was more oriented toward conference presentations than journal publications. Ours is a fast-moving field, and the publication lag once meant that journal papers were no longer on the cutting edge by the time they appeared. This has changed with online publication and electronic manuscript processing, and the Signal Processing Society has made a commitment to further reduce publication turnaround time. The culture of research is starting to recognize this change, and I would like to highlight its importance. The additional space and the review process of the journal together make published papers much higher in quality than conference publications, both in the extent of the results presented and in the clarity of the presentation. Even the best researchers and writers among us can benefit from the perspective of outside reviewers. Better papers reach a wider audience and have greater impact. This is not to say you should stop submitting to conferences, but rather that you should take your work a step further. So, send us your best work: give it the opportunity to be even better, and give us the opportunity to increase the impact of acoustics, speech, and language technology.

back to top


IEEE and ACL to Hold First Joint Workshop on Spoken Language Technology
By Gokhan Tur

The IEEE Signal Processing Society Speech Technical Committee is organizing the first Workshop on Spoken Language Technology (SLT) jointly with the Association for Computational Linguistics (ACL). The goal of this workshop is to bring the speech processing and natural language processing communities together to share and present recent advances in spoken language technology, and to discuss and foster new research in this area. This is in accordance with the greater emphasis given to spoken language processing within the IEEE Signal Processing Society.

This workshop is complementary to the IEEE ASRU workshops, which primarily focus on core speech and signal processing technologies. The SLT workshop addresses spoken language technologies such as translation, understanding, dialog, mining, summarization, annotation, information retrieval and extraction, and more. Given that spoken language technology is a vibrant research area with the potential for significant impact on government and industrial applications, this new workshop will help strengthen the role of the Signal Processing Society, and the IEEE in general, in the area of spoken language processing. It will also be a great opportunity for engineers and researchers from diverse backgrounds to come together in a single forum to share their experiences and address new challenges in human and machine communication.

The chair of the IEEE Speech Technical Committee, Dr. Mazin Gilbert (formerly Rahim), is also chairing this workshop. The workshop will take place in Aruba in December. Please help us make this workshop a success with your contributions and participation. Details can be found on the workshop web page at http://www.slt2006.org.

back to top


Summary of the IEEE 2005 Workshop on Automatic Speech Recognition and Understanding
By Richard Rose and James Glass, General Co-Chairs, ASRU 2005

The IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU2005) was held in San Juan, Puerto Rico, from November 28 to December 1, 2005. This was the ninth biennial workshop in a series that began at Arden House in upstate New York in 1989 and has continued uninterrupted through 2005. This year's meeting was held despite a hectic last-minute process of relocating the workshop from its original venue in Cancun, Mexico. On October 21 and 22, five weeks before the scheduled opening of the workshop, Hurricane Wilma spent two days ravaging the Yucatan Peninsula. Much of the local infrastructure in the region was badly damaged, and it was clear that it would be impossible to hold the workshop at any venue in the vicinity of Cancun. The workshop was moved to the InterContinental San Juan Resort in Puerto Rico. We are grateful to Kay Berkling of the Polytechnic Institute of Puerto Rico for helping with local arrangements in San Juan and to Nancy Sutta Berns of the IEEE Signal Processing Society for helping arrange the facilities at the conference hotel.

The workshop was built upon the excellent research being performed in the community and made available to the workshop in the form of contributed papers. Approximately 180 papers were submitted for review and 73 were accepted, an acceptance rate of about 40%, which makes the ASRU2005 review process the most selective in the 16-year history of the workshop series. All of the contributed papers were presented in featured poster sessions. In total there were approximately 170 workshop attendees from North America, Europe, and Asia.

There were six special technical sessions, each of which featured an invited oral presentation, presentation of contributed papers in poster format, and a panel discussion. The special technical sessions included:

The program also included two evening sessions. The first was a session of speech and language technology demonstrations that included 11 demonstrations from industry and university laboratories. The second was a panel discussion entitled “Government and Industry Funded Speech Research: Successes and Pitfalls.” The technical program for the workshop can be found on the ASRU2005 website, www.asru2005.org, by clicking on “technical program” and then on “detailed schedule”. A list of special sessions, contributed papers, and technology demonstrations, along with the presentation slides of the invited speakers, can be found on the site.

On Tuesday afternoon there was an architectural tour of Old San Juan led by the Dean of Architecture of the Polytechnic University of Puerto Rico. This was followed by a reception and a festive Puerto Rican banquet dinner that included live Bamba entertainment.

Despite the hectic beginning, the workshop was a great success.  We would like to thank the members of the organizing committee for the workshop, the members of the paper review committee, the session chairs, the invited speakers, the panel members, the authors of all of the papers submitted to the workshop, and our corporate sponsors: IBM and Microsoft. The names of all of these contributors can be found on the workshop website.

back to top


EDICS for IEEE Transactions on Audio, Speech, and Language Processing

These are the newly created EDICS categories for Speech Processing and Spoken Language Processing for the IEEE Transactions on Audio, Speech, and Language Processing. For details on each EDICS category, please consult an issue of the Transactions.

Speech Processing

Spoken Language Processing

back to top


New Listserv for SLTC e-Newsletter Distribution
by Michael L. Seltzer

We have created a new listserv hosted by IEEE for the distribution of the SLTC e-Newsletter: [ speechnewsdist <at> listserv <dot> ieee <dot> org ]. This list is intended for disseminating news and information pertaining to the IEEE SPS Speech and Language Technical Committee (SLTC), and in particular for distributing the committee's electronic newsletter. To receive the newsletter and related announcements, simply subscribe to the distribution list (and tell your friends and colleagues!). To stop receiving the newsletter, unsubscribe from the list. Note that if you received this issue (March 2006) by email, you are already subscribed; if you wish to remain subscribed, no action is needed.

To Subscribe:
Send an email with the command "subscribe speechnewsdist" in the message body to [ listserv <at> listserv <dot> ieee <dot> org ].

To Unsubscribe:
Send an email with the command "signoff speechnewsdist" in the message body to [ listserv <at> listserv <dot> ieee <dot> org ].

Note that subscribers cannot post to this distribution list. Please send all contributions, articles, ideas, and feedback to the SLTC e-Newsletter Editorial Board [ speechnewseds <at> ieee <dot> org ].

back to top


Call for Papers:
Workshop on Joint Inference for Natural Language Processing

Workshop at HLT/NAACL 2006
New York City, NY, USA
June 8, 2006

 

New Submission Deadline: March 31, 2006
Late-breaking paper deadline (will not appear in proceedings): May 5, 2006

Description

In NLP there has been increasing interest in moving away from systems that make chains of local decisions independently, and instead toward systems that make multiple decisions jointly using global information. For example, NLP tasks are often solved by a pipeline of processing steps (from speech, to translation, to entity extraction, relation extraction, coreference and summarization)---each of which locally chooses its output to be passed to the next step. However, we can avoid accumulating cascading errors by joint decoding across the pipeline---capturing uncertainty and multiple hypotheses throughout. The use of lattices in speech recognition is well-established, but recently there has been more interest in larger, more complex joint inference, such as joint ASR and MT, and joint extraction and coreference.

The trend toward joint decisions using global information also appears at a smaller scale. For example, the benefit of discriminative reranking is that it can efficiently exploit global features of the output space. Also, recent sequence models, such as CRFs and Maximum-margin Markov networks, are trained to optimize a global objective function over the space of all sequences, leveraging global features of the input.
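As a concrete reminder of what such a global objective looks like (this illustration is ours, not part of the original call), a linear-chain CRF scores an entire label sequence y for an input x as

    P(y \mid x) = \frac{1}{Z(x)} \exp\Big(\sum_{t=1}^{T} \sum_{k} \lambda_k\, f_k(y_{t-1}, y_t, x, t)\Big),
    \qquad Z(x) = \sum_{y'} \exp\Big(\sum_{t=1}^{T} \sum_{k} \lambda_k\, f_k(y'_{t-1}, y'_t, x, t)\Big),

and training maximizes this conditional probability of whole sequences rather than making each tagging decision independently.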

The main challenge in applying joint methods more widely throughout NLP is that they are more complex and more expensive than local approaches. Various models and approximate inference algorithms have been used to maintain efficiency, such as beam search, reranking, simulated annealing, and belief propagation, but much work remains in understanding which methods are best for particular applications, or which new techniques could be brought to bear.
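To make one of these options concrete, here is a minimal sketch (our illustration, not part of the original call) of beam search across a processing pipeline: each stage returns several scored alternatives, and only the best few combined hypotheses are kept, instead of committing to each stage's single best output. The stage functions are hypothetical stand-ins for components such as an ASR system followed by an MT system.

    import heapq
    from typing import Callable, List, Tuple

    Hyp = Tuple[float, str]  # (log-score, hypothesis text)

    def beam_decode(source: str,
                    stages: List[Callable[[str], List[Hyp]]],
                    beam: int = 5) -> List[Hyp]:
        """Carry the top-`beam` hypotheses through every pipeline stage
        instead of passing only each stage's 1-best output forward."""
        hyps: List[Hyp] = [(0.0, source)]
        for stage in stages:
            expanded = [(score + s, out)               # accumulate scores across stages
                        for score, hyp in hyps
                        for s, out in stage(hyp)]      # each stage returns scored alternatives
            hyps = heapq.nlargest(beam, expanded, key=lambda h: h[0])
        return hyps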

The goal of this workshop is to explore techniques for joint processing for NLP tasks that involve multiple, interrelated decisions. Themes of the workshop include:

Potential participants are encouraged to submit papers on these topics, and on others related to joint decision-making in NLP.

Important Dates

Format of Papers

Submit your papers at http://www.softconf.com/start/HLT-WS06-JINLP/submit.html. If you wish to present at the workshop, submit a paper of no more than 8 pages. All submissions must be received by the extended deadline above (March 31, 2006). Submitted papers should be in two-column format and follow the HLT/NAACL style (see http://nlp.cs.nyu.edu/hlt-naacl06/cfp.html). Proceedings will be published in conjunction with the main HLT/NAACL proceedings. Authors who cannot submit a PDF file electronically should contact the organizers.

Organizers

back to top


Call for Papers:
AAAI Workshop on Statistical and Empirical Approaches for Spoken Dialogue Systems
Boston, Massachusetts, USA
July 17, 2006

This workshop seeks to attract new work on statistical and empirical approaches for spoken dialogue systems. We welcome both theoretical and applied work addressing issues such as:

This will be a one-day workshop, consisting mainly of presentations of new work by participants. Interaction will be encouraged and sufficient time will be left for discussion of the work presented. To facilitate a collaborative environment, the workshop size will be limited to authors, presenters, and a small number of other participants.

The day will also feature a keynote talk from Satinder Singh (University of Michigan), who will speak about Reinforcement Learning in the context of spoken dialogue systems.

Proceedings of the workshop will be published as an AAAI technical report.

Prospective authors are invited to submit full-length, 6-page, camera-ready papers via email. Authors are requested to use the AAAI paper template and follow the AAAI formatting guidelines.

AAAI paper template: http://www.aaai.org/Publications/Author/macros-link.html
AAAI formatting guidelines: http://www.aaai.org/Publications/Author/authorinstructions.pdf
Authors are asked to email papers to Jason Williams at jasondwilliams [at] gmail [dot] com.
 
All papers will be reviewed electronically by three reviewers. Comments will be provided and time will be given for incorporation of comments into accepted papers.

For accepted papers, at least one author from each paper is expected to register and attend. If no authors of an accepted paper register for the workshop, the paper may be removed from the workshop proceedings. Finally, authors of accepted papers will be expected to sign a standard AAAI-06 "Permission to distribute" form.

Important Dates

For additional information, please contact Jason Williams at jasondwilliams [at] gmail [dot] com.

back to top


Call for Papers:
IEEE International Workshop on Multimedia Signal Processing
Victoria, BC, Canada

October 3-6, 2006

MMSP-06 is the eighth international workshop on multimedia signal processing organized by the Multimedia Signal Processing Technical Committee of the IEEE Signal Processing Society. The MMSP-06 workshop features several new components that include:

SCOPE

Papers are solicited in, but not limited to, the following general areas:

SCHEDULE

Check the workshop website http://research.microsoft.com/workshops/MMSP06 for updates.

back to top


Call for Papers:
INTERSPEECH 2006: International Conference on Spoken Language Processing
Pittsburgh, PA USA

September 17-21, 2006

INTERSPEECH 2006 - ICSLP, the Ninth International Conference on Spoken Language Processing, dedicated to the interdisciplinary study of speech science and language technology, will be held in Pittsburgh, Pennsylvania, September 17-21, 2006, under the sponsorship of the International Speech Communication Association (ISCA).

The Interspeech meetings are considered the top international conferences in speech and language technology, with more than 1000 attendees from universities, industry, and government agencies. They are unique in that they bring together faculty and students from universities with researchers and developers from government and industry to discuss the latest research advances, technological innovations, and products. The conference offers the prospect of meeting the future leaders of our field, exchanging ideas, and exploring opportunities for collaboration, employment, and sales through keynote talks, tutorials, technical sessions, exhibits, and poster sessions. In recent years the Interspeech meetings have taken place in a number of exciting venues, most recently Lisbon, Jeju Island (Korea), Geneva, Denver, Aalborg (Denmark), and Beijing.

ISCA, together with the Interspeech 2006 - ICSLP organizing committee, would like to encourage submission of papers for the upcoming conference in the following topics of interest:

SPECIAL SESSIONS

In addition to the regular sessions, a series of special sessions has been planned for the meeting. Potential authors are invited to submit papers for special sessions as well as for regular sessions, and all papers in special sessions will undergo the same review process as papers in regular sessions. Confirmed special sessions and their organizers include:

PAPER SUBMISSION

The deadline for submission of 4-page full papers is April 7, 2006.

Paper submission will be exclusively through the conference website, using submission guidelines to be provided. Previously-published papers should not be submitted. The corresponding author will be notified by e-mail of the paper status by June 9, 2006. Minor updates will be allowed from June 10 to June 16, 2006.

IMPORTANT DATES

For further information, visit http://www.interspeech2006.org or send email to info@interspeech2006.org.

Organizer:
Professor Richard M. Stern (General Chair)
Carnegie Mellon University
Electrical Engineering and Computer Science
5000 Forbes Avenue
Pittsburgh, PA 15213-3890
Fax: +1 412 268-3890
email: chair@interspeech2006.org

back to top


Call for Papers:
SAPA2006: ISCA Tutorial and Research Workshop on Statistical And Perceptual Audition
Pittsburgh, PA, USA
September 16, 2006

 

Papers are solicited for the 2006 Workshop on Statistical and Perceptual Audition (SAPA2006), to be held in Pittsburgh PA as a satellite to ICSLP 2006.

Following the successful SAPA2004 workshop in Jeju, Korea, SAPA2006 aims to bring together researchers working on perceptually motivated problems in sound and speech analysis and understanding using statistical and machine learning tools.

There is a wide area of overlap between more heuristic models of human auditory function and purely pattern recognition approaches that are independent of human audition; SAPA aims to be the forum for presentation and discussion of this promising and expanding field.

This will be a one-day workshop with a limited number of oral presentations, chosen for breadth and provocation, and an informal atmosphere to promote discussion. We hope that the participants in the workshop will be exposed to a broader perspective, and that this will help foster new research and interesting variants on current approaches.

Papers describing relevant research and new concepts are solicited on, but not limited to, the following topics:

In all cases, preference will be given to papers that clearly involve both perceptually-defined or perceptually-related problems, and statistical or machine-learning based solutions.

Manuscripts must be between 4 and 6 pages long, in standard ICSLP double-column format. Accepted papers will be published in the workshop proceedings.

Papers must be received by 21 April 2006 (two weeks after the ICSLP deadline). The results of the paper review will be posted by 9 June 2006 (same as ICSLP).

Additional information may be obtained from http://www.sapa2006.org

 Organizers:

Dr. Bhiksha Raj
Research Scientist
Mitsubishi Electric Research Labs,
Cambridge, MA, USA, 02139
bhiksha@merl.com
617 621 7593

Dr. Paris Smaragdis
Research Scientist
Mitsubishi Electric Research Labs,
Cambridge, MA, USA, 02139
paris@merl.com
617 621 7561

Prof. Daniel Ellis
Associate Professor
Columbia University
New York
dpwe@ee.columbia.edu
212 854 8928

back to top


Call for Papers:
IEEE/ACL 2006 Workshop on Spoken Language Technology
Palm Beach, Aruba
December 10 -13, 2006

 

The first workshop on Spoken Language Technology (SLT) sponsored by IEEE and ACL will be held December 10-13, 2006. The goal of this workshop is to bring the speech processing and natural language processing communities together to share and present recent advances in the area of spoken language technology, and to discuss and foster new research in this area. Spoken language technology is a vibrant research area, with the potential for significant impact on government and industrial applications.

Workshop Topics

Submissions for the Technical Program

The workshop program will consist of tutorials, oral and poster presentations, and panel discussions. Attendance will be limited, with priority given to those who will present technical papers; registration of at least one author for each paper is required. Submissions are encouraged on any of the topics listed above. The style guide, templates, and submission form will follow the IEEE ICASSP style. Three members of the Scientific Committee will review each paper. The workshop proceedings will be published on a CD-ROM.

Schedule

Registration and Information

Registration and paper submission, as well as other workshop information, can be found on the SLT website: http://www.slt2006.org

 Organizing Committee

back to top


Call for Participation:
TC-STAR OpenLab on Speech Translation
Trento, Italy
March 30 - April 1, 2006

OpenLab 2006 is a training initiative of the European Integrated Project TC-STAR, Technologies and Corpora for Speech-to-speech Translation Research. OpenLab 2006 aims to expand the TC-STAR research community in the areas of Automatic Speech Recognition (ASR) and Spoken Language Translation (SLT).

Students and young researchers in these areas are invited to contribute on shared TC-STAR project tasks.

The translation of European Parliament speeches from Spanish to English is the application domain of interest. Contributions on the following and other closely related topics will be welcome:

Several months before the meeting in Trento, language resources and tools will be made available to interested participants. Word graphs and n-best lists generated by different ASR and SLT systems will be provided, as well as training and testing collections for developing and evaluating an SLT system. Participants will present and discuss their results in Trento and will have the opportunity to attend tutorial lectures given by experts. Participation in OpenLab 2006 is free. In addition, for a limited number of applicants, lodging expenses will be covered by the organization.

Program Chairs:
Marcello Federico, ITC-irst, Trento
Ralf Schlüter, RWTH, Aachen

back to top


Call for Participation:
3rd Joint Workshop on Multimodal Interaction and Related Machine Learning Algorithms
Washington, DC, USA
May 1-3, 2006

The third MLMI workshop is coming to Washington DC, USA, May 1-3, 2006, following successful workshops in Martigny, Switzerland (2004) and Edinburgh, UK (2005). MLMI is a joint workshop that brings together researchers from the different communities working on the common theme of advanced machine learning algorithms for processing and structuring multimodal human interaction. The motivation for creating this multi-disciplinary workshop arose from an actual need in several of the sponsoring projects.

The workshop will feature talks (including a number of invited speakers), posters, and demonstrations in the following areas of interest:

In common with MLMI'05, the workshop will be immediately followed by the NIST meeting recognition workshop, centering on the Rich Transcription 2006 Meeting Recognition (RT-06) evaluation. This workshop will take place at the same location during 3-4 May 2006.

In common with MLMI'04 and MLMI'05, the workshop proceedings will be published by Springer, in the Lecture Notes in Computer Science (LNCS) series.

MLMI is supported by the US National Institute of Standards and Technology (NIST), through Integrated Projects and Networks of Excellence funded by the FP6 IST priority of the European Union, and through the Swiss National Science Foundation.

Supporting projects:

back to top


Call for Participation:
ICASSP 2006: International Conference on Acoustics, Speech, and Signal Processing
Toulouse, France
May 14-19, 2006

Registration is now open for the 31st International Conference on Acoustics, Speech, and Signal Processing (ICASSP), which will be held at the Centre des Congres Pierre Baudis in Toulouse, France, May 14-19, 2006. ICASSP is the world's largest and most comprehensive technical conference focused on signal processing and its applications. The conference will feature world-class speakers, tutorials, exhibits, and over 50 lecture and poster sessions on topics such as these:

plus other emerging and specialized areas of interest.

To register, please visit www.icassp2006.org and review the introductory registration information. You may then proceed to the bottom of the page and click on the Registration tab. Please note that the deadline for advance registration is Monday, April 10, 2006; all registrations after that time must be done on-site.

We look forward to receiving your registration materials and to welcoming you to Toulouse in May.

back to top


Positions Available:
Human Language Technology/Computational Linguistics
Assistant/Associate/Senior Research Scientists

The Center for Advanced Study of Language (CASL) is seeking to expand its research team in areas related to Human Language Technology (HLT) development, evaluation, and adaptation for integration into the workplace.

CASL, a University-Affiliated Research Center located in a newly built facility at the University of Maryland, offers HLT researchers the opportunity to pursue innovative, transdisciplinary research in teams composed of linguists, computer scientists, engineers, psychologists, anthropologists and second language acquisition specialists.  

Our mission is to pursue basic and applied research which will help government language professionals improve their on-the-job performance in linguistic analysis and foreign language training and testing. We work on complex problems with authentic data, and we test and evaluate systems with the help of working language professionals.

Our research emphasizes multilingual HLT applications (e.g. machine translation, multilingual summarization, information extraction and retrieval, and speech processing applications such as language, dialect, and speaker ID).  For instance, we have used machine learning to speed the development of such language resources as dictionaries for emerging strategic languages, and data mining and classification methods to select relevant material to enhance foreign language learning. 

In addition to our rich internal environment, CASL’s affiliation with the University of Maryland provides opportunities for collaboration with faculty and students, participation in colloquia, and utilization of the many facilities of a large research university with a highly ranked HLT faculty.

 Preference will be given to those candidates whose record indicates the ability to tackle complex, interdisciplinary research and to work with a range of institutes and researchers. Candidates must have earned a Ph.D. in an area providing training in Computational Linguistics or HLT (e.g., Computer Science, Electrical Engineering, Information Science, or Linguistics). Candidates must hold U.S. citizenship and be willing to obtain a security clearance. For information on U.S. government security clearances, please see http://www.dss.mil/psi/faq.htm.

TO APPLY: Send a letter of application, curriculum vitae including potential referees, and three representative publications to HLT Positions, CASL, University of Maryland, Box 25, College Park, MD 20742 or email jobs@casl.umd.edu. For best consideration apply before April 1, 2006. The University of Maryland is an affirmative action, equal opportunity employer. Women and minorities are encouraged to apply. Questions about this position should be sent by email to ablumberg@casl.umd.edu.

back to top


Positions Available:
EdSST - PhD positions in speech science and technology

Five PhD positions funded by the European Commission under the Marie Curie Early Stage Research Training (EST) scheme are available on the Edinburgh Speech Science and Technology (EdSST) project. EdSST is an interdisciplinary research training programme that aims to close the gap between speech science and speech technology, focussing on a number of overlapping research areas, each of which includes components from both speech science and speech technology:

For further details see: http://www.cstr.ed.ac.uk/edsst/research.html

You should have a first or upper second class honours degree or its equivalent, and/or a Masters degree, in Informatics or Linguistics. Informatics includes areas such as Artificial Intelligence, Cognitive Science, Computer Science, Information Engineering, and Computational Linguistics. Linguistics includes areas such as Phonetics, Speech Science, Speech and Language Therapy, and Human Communication Sciences. Applicants with degrees in the following disciplines will also be considered: Electrical Engineering, Psychology, Mathematics, Philosophy, and Physics.

You must also fulfil European Union Marie Curie EST selection criteria.

EdSST Fellows will be expected to register for a PhD with either the University of Edinburgh or QMUC, depending on PhD topic.

Application details and further information: http://www.cstr.ed.ac.uk/edsst/opportunities.html

back to top


TRANSITIONS

The SLTC e-Newsletter would like to announce professors, researchers, and developers in the speech and language community who have taken new positions. If you have moved recently or are in the process of moving to a new position in the near future, send us your new contact information so it can be posted in the next edition.

back to top