Unsupervised Video Summarization With Cycle-Consistent Adversarial LSTM Networks


Li Yuan; Francis Eng Hock Tay; Ping Li; Jiashi Feng

Video summarization is an important technique for browsing, managing, and retrieving large collections of videos efficiently. Its main objective is to minimize the information loss incurred when selecting a subset of frames from the original video, so that the summary video faithfully represents the overall story of the original. Recently developed unsupervised video summarization approaches do not require tedious annotation of important frames to train a summarization model and are thus practically attractive. However, their performance remains limited by the difficulty of minimizing the information loss between the summary and the original video. In this paper, we address unsupervised video summarization by developing a novel cycle-consistent adversarial LSTM architecture that effectively reduces the information loss in the summary video. The proposed model, named Cycle-SUM, consists of a frame selector and a cycle-consistent-learning-based evaluator. The selector is a bidirectional LSTM network that captures long-range relationships between video frames. To overcome the difficulty of specifying a suitable information-preserving metric between the original and summary videos, the evaluator is introduced to "supervise" the selector and improve summarization quality. Specifically, the evaluator is composed of two generative adversarial networks (GANs): the forward GAN learns to reconstruct the original video from the summary video, while the backward GAN learns to invert this process. We establish the relation between mutual information maximization and this cycle learning procedure, and further introduce a cycle-consistent loss to regularize the summarization. Extensive experiments on three video summarization benchmark datasets demonstrate state-of-the-art performance and show the superiority of the Cycle-SUM model over other unsupervised approaches.
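The cycle-consistent idea described above can be illustrated with a minimal sketch. This is not the paper's implementation: the selector's importance scores are random stand-ins for the bidirectional LSTM's output, and the two GAN generators are replaced by toy linear maps; only the round-trip (cycle) loss structure follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

T, D = 20, 8                          # frames, feature dimension
X = rng.normal(size=(T, D))           # original video's frame features

# Stand-in for the selector: per-frame importance scores in (0, 1).
# In Cycle-SUM these come from a bidirectional LSTM over the frames.
scores = 1.0 / (1.0 + np.exp(-rng.normal(size=(T, 1))))
S = scores * X                        # soft "summary" (importance-weighted frames)

# Stand-ins for the two generators of the evaluator's GANs:
# the forward one maps summary -> original, the backward one inverts it.
W_fwd = rng.normal(size=(D, D)) * 0.1
W_bwd = rng.normal(size=(D, D)) * 0.1
g_fwd = lambda V: V @ W_fwd           # summary features -> reconstructed original
g_bwd = lambda V: V @ W_bwd           # original features -> reconstructed summary

def cycle_loss(X, S):
    """Cycle-consistent loss: both videos should survive a round trip."""
    forward_cycle = np.mean((g_fwd(g_bwd(X)) - X) ** 2)   # X -> S' -> X
    backward_cycle = np.mean((g_bwd(g_fwd(S)) - S) ** 2)  # S -> X' -> S
    return forward_cycle + backward_cycle

loss = cycle_loss(X, S)
```

Minimizing this loss (jointly with the adversarial objectives, omitted here) pushes the summary to retain enough information that the original video can be reconstructed from it, which is how the paper connects the cycle learning procedure to mutual information maximization.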

