Big data can be a blessing: with very large training data sets, it becomes possible to perform complex learning tasks with unprecedented accuracy. Yet this improved performance comes at the price of enormous computational challenges. Thus, one may wonder: Is it possible to leverage the information content of huge data sets while keeping computational resources under control? Can this also help solve some of the privacy issues raised by large-scale learning? This is the ambition of compressive learning, where the data set is massively compressed before learning. Here, a "sketch" is first constructed by computing carefully chosen nonlinear random features [e.g., random Fourier (RF) features] and averaging them over the whole data set. Parameters are then learned from the sketch, without access to the original data set. This article surveys the current state of the art in compressive learning, including the main concepts and algorithms, their connections with established signal processing methods, existing theoretical guarantees on both information preservation and privacy preservation, and important open problems. An extended version of this article, containing additional references and more in-depth discussions on a variety of topics, is also available.
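As a rough illustration of the sketching step described above, the following NumPy snippet computes a sketch of a toy data set by averaging complex exponential (random Fourier) features over all samples. The data set, the sketch size `m`, and the Gaussian frequency distribution are illustrative assumptions, not choices prescribed by the article:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data set: n points in d dimensions (stand-in for real data).
n, d, m = 10_000, 2, 64          # m = sketch size (number of random features)
X = rng.normal(size=(n, d))

# Draw random frequencies; a Gaussian distribution is one common (assumed) choice.
Omega = rng.normal(size=(d, m))

# Sketch: empirical average of the nonlinear random features
# exp(i * <omega_k, x_j>) over the whole data set -- an m-dimensional
# summary, regardless of how large n is.
sketch = np.exp(1j * X @ Omega).mean(axis=0)   # complex vector of shape (m,)
```

Note that the sketch can be updated one sample (or one mini-batch) at a time and its size is independent of `n`, which is what makes learning from the sketch alone attractive for very large data sets.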