Developing semi-supervised seq2seq (S4) learning for sequence transduction tasks in natural language processing (NLP), e.g., semantic parsing, is challenging, since both the input and the output sequences are discrete. This discrete nature is problematic for methods that require gradients from either the input space or the output space. Recently, a learning method called joint stochastic approximation was developed for unsupervised learning of fixed-dimensional autoencoders; it provably avoids propagating gradients through discrete latent variables, a difficulty suffered by Variational Auto-Encoders (VAEs). In this letter, we propose seq2seq Joint-stochastic-approximation AutoEncoders (JAEs) and apply them to S4 learning for NLP sequence transduction tasks. Further, we propose bi-directional JAEs (bi-JAEs) to leverage not only unpaired input sequences (the most commonly studied setting) but also unpaired output sequences. Experiments on two benchmark datasets for semantic parsing show that JAEs consistently outperform VAEs in S4 learning, and that bi-JAEs yield further improvements.
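To make the key idea concrete, the following is a minimal toy sketch (not the paper's implementation) of joint-stochastic-approximation training with a discrete latent. The encoder q(z|x) and decoder p(x|z) are hypothetical probability tables standing in for seq2seq networks, data are length-1 "sequences", and the prior p(z) is assumed uniform. A Metropolis independence sampler with q as the proposal maintains a cached latent per example, and both models then take plain maximum-likelihood gradient steps on the sampled pair (x, z), so no gradient ever passes through the discrete latent:

```python
import numpy as np

rng = np.random.default_rng(0)
V, Z, N = 5, 3, 200                  # toy input / latent vocab sizes, dataset size
data = rng.integers(0, V, size=N)    # unpaired "input sequences" (length 1)

logit_q = np.zeros((V, Z))           # encoder q(z|x), a table in this toy
logit_p = np.zeros((Z, V))           # decoder p(x|z)
prior = np.full(Z, 1.0 / Z)          # fixed uniform prior p(z) (assumption)

def softmax(a):
    e = np.exp(a - a.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

cache = rng.integers(0, Z, size=N)   # per-example chain state for the sampler
lr = 0.1

for _ in range(5):                   # a few sweeps over the unpaired data
    for i, x in enumerate(data):
        q = softmax(logit_q[x])
        z_old, z_new = cache[i], rng.choice(Z, p=q)
        p_tab = softmax(logit_p)
        # Metropolis independence sampler with proposal q:
        # accept z' with prob min(1, p(x,z') q(z|x) / (p(x,z) q(z'|x))).
        ratio = (prior[z_new] * p_tab[z_new, x] * q[z_old]) / \
                (prior[z_old] * p_tab[z_old, x] * q[z_new] + 1e-12)
        if rng.random() < min(1.0, ratio):
            cache[i] = z_new
        z = cache[i]
        # Stochastic-approximation step: ordinary supervised log-likelihood
        # gradients on the sampled pair (x, z); the discrete z is treated
        # as data, so no reparameterization or score-function trick is needed.
        logit_p[z] += lr * (np.eye(V)[x] - p_tab[z])   # grad of log p(x|z)
        logit_q[x] += lr * (np.eye(Z)[z] - q)          # grad of log q(z|x)
```

This is the mechanism that lets JAEs sidestep the discrete-gradient problem that VAEs face: the sampler handles the latent, and each model only ever sees a fully observed (input, latent) pair.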