Semi-Supervised Seq2seq Joint-Stochastic-Approximation Autoencoders With Applications to Semantic Parsing


By: Yunfu Song; Zhijian Ou

Developing Semi-Supervised Seq2Seq (S4) learning for sequence transduction tasks in natural language processing (NLP), e.g., semantic parsing, is challenging, since both the input and the output sequences are discrete. This discreteness poses difficulties for methods that require gradients from either the input space or the output space. Recently, a learning method called joint stochastic approximation (JSA) was developed for unsupervised learning of fixed-dimensional autoencoders; it theoretically avoids propagating gradients through discrete latent variables, a problem from which Variational Auto-Encoders (VAEs) suffer. In this letter, we propose seq2seq Joint-stochastic-approximation AutoEncoders (JAEs) and apply them to S4 learning for NLP sequence transduction tasks. Further, we propose bi-directional JAEs (called bi-JAEs) to leverage not only unpaired input sequences (the most commonly studied setting) but also unpaired output sequences. Experiments on two benchmark datasets for semantic parsing show that JAEs consistently outperform VAEs in S4 learning and that bi-JAEs yield further improvements.
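To make the mechanism concrete, the sketch below illustrates the kind of update the abstract alludes to: the inference model q(h|x) proposes a discrete latent h, a Metropolis independence sampler targeting p(h|x) accepts or rejects the proposal, and both models are then trained by ordinary stochastic gradients evaluated at the accepted sample, so no gradient ever needs to flow through the discrete h. This is a minimal toy sketch in PyTorch with a single categorical latent; the class names, dimensions, and Gaussian observation model are illustrative assumptions, not the paper's seq2seq architecture or released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Inference(nn.Module):
    """Hypothetical inference model q_phi(h | x) over a categorical latent."""
    def __init__(self, x_dim, n_latent):
        super().__init__()
        self.net = nn.Linear(x_dim, n_latent)

    def logits(self, x):
        return self.net(x)

class Generator(nn.Module):
    """Hypothetical generative model p_theta(x, h) = p(h) p(x | h)."""
    def __init__(self, x_dim, n_latent):
        super().__init__()
        self.prior_logits = nn.Parameter(torch.zeros(n_latent))
        self.decode = nn.Linear(n_latent, x_dim)

    def log_joint(self, x, h):
        # log p(h) + log p(x | h), with a unit-variance Gaussian likelihood
        log_ph = F.log_softmax(self.prior_logits, dim=-1)[h]
        mean = self.decode(F.one_hot(h, self.prior_logits.numel()).float())
        return log_ph - 0.5 * ((x - mean) ** 2).sum(-1)

def jsa_step(x, q, p, h_prev, opt):
    """One JSA-style update. h_prev is the cached Markov-chain state
    (one latent per training example, persisted across updates)."""
    logits = q.logits(x)
    log_q = F.log_softmax(logits, dim=-1)
    h_prop = torch.distributions.Categorical(logits=logits).sample()
    # Metropolis independence sampler targeting p(h|x) with proposal q(h|x):
    # accept with prob min(1, [p(x,h') q(h_prev|x)] / [p(x,h_prev) q(h'|x)])
    log_ratio = (p.log_joint(x, h_prop) - p.log_joint(x, h_prev)
                 + log_q.gather(-1, h_prev[:, None]).squeeze(-1)
                 - log_q.gather(-1, h_prop[:, None]).squeeze(-1))
    accept = torch.rand_like(log_ratio).log() < log_ratio
    h = torch.where(accept, h_prop, h_prev).detach()  # discrete, no gradient
    # Joint SGD: ascend log p(x,h) and log q(h|x) at the accepted sample
    loss = -(p.log_joint(x, h).mean()
             + log_q.gather(-1, h[:, None]).squeeze(-1).mean())
    opt.zero_grad()
    loss.backward()
    opt.step()
    return h

# Toy usage: the chain state h persists across updates
x = torch.randn(8, 16)
q, p = Inference(16, 10), Generator(16, 10)
opt = torch.optim.Adam(list(q.parameters()) + list(p.parameters()), lr=1e-3)
h = torch.randint(0, 10, (8,))
for _ in range(100):
    h = jsa_step(x, q, p, h, opt)
```

Note the contrast with a VAE: there the discrete sample sits inside the loss and requires a score-function or relaxation trick, whereas here the sample is fixed by the MCMC step before any gradient is taken.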
