Quaternions in Signal and Image Processing: A comprehensive and objective overview



By: Sebastian Miron; Julien Flamant; Nicolas Le Bihan; Pierre Chainais; David Brie

Quaternions are still largely misunderstood and often considered an “exotic” signal representation without much practical utility, despite the fact that they have been present in the signal and image processing community for more than 30 years now. The main aim of this article is to counter this misconception and to demystify the use of quaternion algebra for solving problems in signal and image processing. To this end, we propose a comprehensive and objective overview of the key aspects of quaternion representations, models, and methods, and illustrate our journey through the literature with flagship applications. We conclude this work with an outlook on the remaining challenges and open problems in quaternion signal and image processing.

History, Background and Aim of the Article

Deep learning (DL) has revolutionized engineering and the sciences in the modern data age. The typical goal of DL is to predict an output y ∈ Y (e.g., a label or response) from an input x ∈ X (e.g., a feature or example). A neural network (NN) is “trained” to fit a set of data consisting of the pairs {(x_n, y_n)}_{n=1}^{N} by finding a set of NN parameters θ so that the NN mapping closely matches the data. The trained NN is a function, denoted by f_θ : X → Y, that can be used to predict the output y ∈ Y of a new input x ∈ X. This paradigm is referred to as supervised learning, which is the focus of this article. The success of DL has spawned a burgeoning industry that is continually developing new applications, NN architectures, and training algorithms. This article reviews recent developments in the mathematics of DL, focusing on the characterization of the kinds of functions learned by NNs fit to data. There are currently many competing theories that explain the success of DL. These developments are part of a wider body of theoretical work that can be crudely organized into three broad categories: 1) approximation theory with NNs, 2) the design and analysis of optimization (“training”) algorithms for NNs, and 3) characterizations of the properties of trained NNs.
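The supervised-learning setup described above can be sketched in a few lines of code. The snippet below is a minimal illustration, not taken from the article: the one-hidden-layer architecture, the synthetic sine data, the step size, and the iteration count are all assumptions chosen for the example. It fits parameters θ of a small network f_θ : X → Y to data pairs {(x_n, y_n)} by gradient descent on the mean-squared error.

```python
# Minimal sketch of supervised learning with a neural network.
# All architectural choices (one hidden layer, tanh, step size)
# are illustrative assumptions, not from the article.
import numpy as np

rng = np.random.default_rng(0)

# Training pairs {(x_n, y_n)}_{n=1}^{N}: here y = sin(x) plus noise.
N = 200
x = rng.uniform(-3.0, 3.0, size=(N, 1))
y = np.sin(x) + 0.1 * rng.normal(size=(N, 1))

# Parameters theta of the network f_theta : X -> Y.
H = 16
W1 = rng.normal(scale=0.5, size=(1, H))
b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=(H, 1))
b2 = np.zeros(1)

def f_theta(x):
    """One-hidden-layer network with tanh activation."""
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

# "Training": full-batch gradient descent on the mean-squared error,
# so that the NN mapping closely matches the data pairs.
lr = 0.1
for step in range(5000):
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y                       # residual drives the gradients
    gW2 = h.T @ err / N
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h**2)     # backprop through tanh
    gW1 = x.T @ dh / N
    gb1 = dh.mean(axis=0)
    W1 -= lr * gW1
    b1 -= lr * gb1
    W2 -= lr * gW2
    b2 -= lr * gb2

# The trained f_theta can now predict the output y of a new input x.
mse = float(np.mean((f_theta(x) - y) ** 2))
```

After training, `f_theta` approximates the noisy sine data; the final mean-squared error is substantially below the variance of the targets, which is what "the NN mapping closely matches the data" means operationally.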
