SPM Articles


Visualizing the information inside objects is a long-standing need that bridges physics, chemistry, and biology with computation. Among tomographic techniques, terahertz (THz) computational imaging stands out for its ability to digitize multidimensional object information in a nondestructive, nonionizing, and noninvasive way.

Electromagnetic (EM) imaging is widely applied in sensing for security, biomedicine, geophysics, and various industries. It is an ill-posed inverse problem whose solution is usually computationally expensive. Machine learning (ML) techniques, and especially deep learning (DL), show promise for fast and accurate imaging. However, the high performance of purely data-driven approaches relies on constructing a training set that is statistically consistent with practical scenarios, which is often not possible in EM-imaging tasks. Consequently, generalizability becomes a major concern.
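As a rough illustration of why such imaging is ill-posed and how regularization stabilizes it, the following is a minimal NumPy sketch of a linearized inverse problem solved with Tikhonov regularization; the forward operator, dimensions, and values are illustrative assumptions, not details from the article.

```python
import numpy as np

# Minimal sketch of a linearized imaging inverse problem (all names and
# dimensions here are illustrative assumptions).  Under a Born-type
# linearization, measurements y relate to the unknown contrast x through
# a forward operator A:  y = A x + noise.
rng = np.random.default_rng(0)
m, n = 64, 256                      # fewer measurements than unknowns -> ill-posed
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[10, 80, 200]] = [1.0, -0.5, 0.8]
y = A @ x_true + 0.01 * rng.standard_normal(m)

# Naive least squares is unstable here; Tikhonov regularization trades
# data fidelity against a prior to stabilize the reconstruction:
#   x_hat = argmin ||A x - y||^2 + lam * ||x||^2
lam = 1e-1
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
print("reconstruction error:", np.linalg.norm(x_hat - x_true))
```
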

Compressive sensing (CS) accurately reconstructs images from far fewer measurements than the Nyquist–Shannon sampling theorem suggests, which has attracted considerable attention in the computational imaging community. While classic image CS schemes enforce sparsity in analytical transforms or bases, learning-based approaches have become increasingly popular in recent years. Such methods can effectively model the structure of image patches by optimizing their sparse representations or by learning deep neural networks, while preserving the known or modeled sensing process.
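To make the CS idea concrete, here is a minimal sketch of sparse recovery from undersampled measurements using iterative soft thresholding (ISTA), one classic CS solver; the sensing matrix, sparsity level, and parameters are illustrative assumptions.

```python
import numpy as np

# ISTA sketch for compressive sensing recovery, assuming a generic
# random sensing matrix; purely illustrative.
rng = np.random.default_rng(1)
m, n, k = 50, 200, 5                # m << n measurements, k-sparse signal
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

# Solve  min ||A x - y||^2 / 2 + lam * ||x||_1  by proximal gradient descent.
lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1 / Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    g = A.T @ (A @ x - y)                     # gradient of the data-fidelity term
    z = x - step * g
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

print("recovered support:", np.nonzero(np.abs(x) > 1e-2)[0])
```
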

Beyond the impressive predictive power of machine learning (ML) models, explanation methods have recently emerged that enable the interpretation of complex nonlinear learning models, such as deep neural networks. Gaining a better understanding is especially important in safety-critical ML applications and in medical diagnostics. Although such explainable artificial intelligence (XAI) techniques have become very popular for classifiers, little attention has thus far been devoted to XAI for regression models (XAIR).
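As one concrete instance of such an explanation method, the sketch below computes a simple input-gradient (saliency) attribution for a toy regression model; the model and inputs are made up for illustration and are not from the article.

```python
import numpy as np

# Toy sketch of gradient-based attribution (saliency) for regression.
def model(x):
    # a made-up nonlinear regressor: y = sin(x0) + 0.5 * x1**2 + 0.1 * x2
    return np.sin(x[0]) + 0.5 * x[1] ** 2 + 0.1 * x[2]

def input_gradient(f, x, eps=1e-5):
    # finite-difference estimate of df/dx, used as a feature attribution
    grad = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        grad[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return grad

x = np.array([0.3, 1.2, -0.7])
print("prediction:", model(x))
print("attribution (df/dx):", input_gradient(model, x))
```
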
In many modern data science problems, data are represented by a graph (network), e.g., social, biological, and communication networks. Over the past decade, numerous signal processing and machine learning (ML) algorithms have been introduced for analyzing graph-structured data. With the growth of interest in graphs and graph-based learning tasks in a variety of applications, there is a need to explore explainability in graph data science.
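For readers new to the area, the following sketch shows two building blocks that much of graph data science rests on: the graph Laplacian as a smoothness measure, and one step of normalized neighborhood averaging; the small graph and signal are illustrative assumptions.

```python
import numpy as np

# A 4-node toy graph, given by its adjacency matrix.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))                  # degree matrix
L = D - A                                   # combinatorial graph Laplacian

x = np.array([1.0, 0.9, 0.2, -1.0])         # a signal living on the nodes
print("graph smoothness x^T L x:", x @ L @ x)  # small value -> smooth over edges

# One step of normalized neighborhood averaging (GCN-style propagation):
A_hat = A + np.eye(4)                       # add self-loops
D_hat_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
x_prop = D_hat_inv_sqrt @ A_hat @ D_hat_inv_sqrt @ x
print("propagated signal:", x_prop)
```
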
Data-driven solutions are playing an increasingly important role in numerous practical problems across multiple disciplines. The shift from the traditional model-driven approaches to those that are data driven naturally emphasizes the importance of the explainability of solutions, as, in this case, the connection to a physical model is often not obvious. Explainability is a broad umbrella and includes interpretability, but it also implies that the solutions need to be complete, in that one should be able to “audit” them, ask appropriate questions, and hence gain further insight about their inner workings.

Self-supervised representation learning (SSRL) methods aim to provide powerful, deep feature learning without the requirement of large annotated data sets, thus alleviating the annotation bottleneck, one of the main barriers to the practical deployment of deep learning today. These techniques have advanced rapidly in recent years, with their efficacy approaching and sometimes surpassing fully supervised pretraining alternatives across a variety of data modalities, including image, video, sound, text, and graphs.
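As a concrete flavor of one popular SSRL family, the sketch below evaluates a contrastive (InfoNCE-style) objective on random stand-in embeddings of two augmented views; the batch size, temperature, and embeddings are illustrative assumptions, not details from the article.

```python
import numpy as np

# Contrastive (InfoNCE-style) objective: embeddings of two augmented
# views of the same sample should agree, while differing from the other
# samples in the batch.  Random vectors stand in for an encoder's output.
rng = np.random.default_rng(2)
batch, dim, tau = 8, 16, 0.1
z1 = rng.standard_normal((batch, dim))              # view-1 embeddings
z2 = z1 + 0.1 * rng.standard_normal((batch, dim))   # view 2: a perturbed copy

z1 /= np.linalg.norm(z1, axis=1, keepdims=True)     # unit-normalize
z2 /= np.linalg.norm(z2, axis=1, keepdims=True)

logits = z1 @ z2.T / tau                            # cosine similarity / temperature
# Cross-entropy with the matching view as the positive class:
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
print("InfoNCE loss:", -np.mean(np.diag(log_probs)))
```
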
The dramatic success of deep learning is largely due to the availability of data. Data samples are often acquired on edge devices, such as smartphones, vehicles, and sensors, and in some cases cannot be shared due to privacy considerations. Federated learning is an emerging machine learning paradigm for training models across multiple edge devices holding local data sets, without explicitly exchanging the data. Learning in a federated manner differs from conventional centralized machine learning and poses several unique core challenges and requirements that are closely related to classical problems studied in signal processing and communications.
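The canonical algorithm in this setting is federated averaging (FedAvg); the sketch below runs it on a toy linear-regression problem with synthetic per-device data, an illustrative assumption rather than a setup from the article.

```python
import numpy as np

# FedAvg sketch: each device takes local gradient steps on its own data,
# and the server averages the resulting models; raw data never leaves
# the devices.  Model and data are made up for illustration.
rng = np.random.default_rng(3)
w_true = np.array([2.0, -1.0])
devices = []
for _ in range(5):                           # 5 edge devices, local data stays put
    X = rng.standard_normal((20, 2))
    y = X @ w_true + 0.05 * rng.standard_normal(20)
    devices.append((X, y))

w_global = np.zeros(2)
for _ in range(20):                          # communication rounds
    local_models = []
    for X, y in devices:
        w = w_global.copy()
        for _ in range(5):                   # local gradient steps
            grad = X.T @ (X @ w - y) / len(y)
            w -= 0.1 * grad
        local_models.append(w)
    w_global = np.mean(local_models, axis=0) # server aggregates, never sees data

print("learned weights:", w_global)
```
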
A window function is a mathematical function that is zero valued outside some chosen interval [1], [2]. For applications like filtering, detection, and estimation, window functions take the form of time-limited functions, which are in general real and even [3], [4], while for applications like beamforming and image processing, they are space-limited functions. A spatial window can be a complex function that optimizes the beams in magnitude as well as in phase, as in certain antenna arrays, where the phasor currents in the array are complex numbers [5].
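As a concrete example of a time-limited window, the sketch below builds a Hann window, which is real and even about its center, and compares spectral leakage with and without it; the signal and sizes are illustrative.

```python
import numpy as np

# Hann window: real, even about its center, zero valued at the interval edges.
N = 64
n = np.arange(N)
hann = 0.5 * (1 - np.cos(2 * np.pi * n / (N - 1)))

# Windowing before the FFT reduces spectral leakage caused by the
# implicit rectangular truncation of the signal:
t = n / N
sig = np.sin(2 * np.pi * 5.3 * t)            # tone deliberately off an FFT bin
spectrum_rect = np.abs(np.fft.rfft(sig))
spectrum_hann = np.abs(np.fft.rfft(sig * hann))
print("leakage at bin 15 (rect): %.4f" % spectrum_rect[15])
print("leakage at bin 15 (hann): %.4f" % spectrum_hann[15])
```
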
Multiscale 3D characterization is widely used by materials scientists to further their understanding of the relationships between microscopic structure and macroscopic function. Scientific computed tomography (SCT) instruments are among the most popular choices for 3D nondestructive characterization of materials at length scales ranging from angstroms to microns. These instruments typically have a source of radiation (such as electrons, X-rays, or neutrons) that interacts with the sample to be studied and a detector assembly to capture the result of this interaction (see Figure 1).
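At its core, each SCT measurement is a line integral of the object along the beam path; the toy sketch below forms parallel-beam projections of a simple phantom and an unfiltered backprojection, purely as an illustration of this forward model and not of any specific instrument.

```python
import numpy as np

# Toy parallel-beam tomography forward model: each detector reading is a
# line integral (here, a sum) of the object along the beam direction.
phantom = np.zeros((64, 64))
phantom[24:40, 28:36] = 1.0                  # a simple rectangular object

proj_0 = phantom.sum(axis=0)                 # projection at 0 degrees (column sums)
proj_90 = phantom.sum(axis=1)                # projection at 90 degrees (row sums)

# Reconstruction inverts many such projections (e.g., filtered
# backprojection or iterative methods); here we only smear the two
# projections back to show the idea (unfiltered backprojection).
backproj = np.tile(proj_0, (64, 1)) + np.tile(proj_90[:, None], (1, 64))
print("phantom mass:", phantom.sum(), "projection mass:", proj_0.sum())
```
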
