SPS Feed

Top Reasons to Join SPS Today!

1. IEEE Signal Processing Magazine
2. Signal Processing Digital Library*
3. Inside Signal Processing Newsletter
4. SPS Resource Center
5. Career advancement & recognition
6. Discounts on conferences and publications
7. Professional networking
8. Communities for students, young professionals, and women
9. Volunteer opportunities
10. Coming soon! PDH/CEU credits

The Latest News, Articles, and Events in Signal Processing

The IEEE Signal Processing Society (SPS) announces the 2023 Class of Distinguished Lecturers and Distinguished Industry Speakers for the term of 1 January 2023 to 31 December 2024. The IEEE SPS Distinguished Lecturer (DL) Program gives Chapters access to well-known educators and authors in the field of signal processing to lecture at Chapter meetings.

Are you looking for innovative ways to energize and collaborate with your local signal processing community? Consider hosting a Seasonal School or Member Driven Initiative!

March 21-24, 2023
Location: Snowbird, UT, USA

Date: 5-10 December 2022
Registration Deadline: 25 November 2022
Location: Andhra Pradesh, India

FAPESP
The research applies machine learning techniques to predict floods using data from sensors deployed in São Carlos - SP. Candidates for this position must have obtained their Ph.D. in CS (or related fields) in the last 5 years. Other requirements are to have authored articles in the area and to demonstrate experience in research and software development, particularly in Python.

Audio pattern recognition is an important research topic in machine learning, and includes several tasks such as audio tagging, acoustic scene classification, music classification, speech emotion classification, and sound event detection. In this blog, we introduce pretrained audio neural networks (PANNs) trained on the large-scale AudioSet dataset. These PANNs are transferred to other audio-related tasks. We investigate the performance and computational complexity of PANNs modeled by a variety of convolutional neural networks. We propose an architecture called Wavegram-Logmel-CNN, which uses both the log-mel spectrogram and the raw waveform as input features.
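The log-mel spectrogram that serves as one of the two PANN input features can be sketched with plain NumPy. This is a minimal illustration of the standard front end (frame, window, power spectrum, triangular mel filterbank, log), not the authors' pipeline; the sample rate, FFT size, hop length, and mel-scale formula below are assumptions for the example.

```python
import numpy as np

def hz_to_mel(f):
    # HTK-style mel scale (an assumption; implementations vary)
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def log_mel_spectrogram(wave, sr=16000, n_fft=512, hop=160, n_mels=64):
    """Toy log-mel front end: frame -> window -> |FFT|^2 -> mel filterbank -> log."""
    # Frame the signal and apply a Hann window
    n_frames = 1 + (len(wave) - n_fft) // hop
    idx = np.arange(n_fft)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = wave[idx] * np.hanning(n_fft)
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2   # (n_frames, n_fft//2 + 1)

    # Triangular mel filterbank spanning 0 Hz to the Nyquist frequency
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fbank[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)  # rising edge
        fbank[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)  # falling edge

    return np.log(power @ fbank.T + 1e-10)             # (n_frames, n_mels)

# One second of a 440 Hz tone as a stand-in for real audio
t = np.arange(16000) / 16000.0
feat = log_mel_spectrogram(np.sin(2 * np.pi * 440.0 * t))
print(feat.shape)  # (97, 64): 97 frames x 64 mel bands
```

In the Wavegram-Logmel-CNN described above, a feature map of this kind is concatenated with a learned waveform-based representation before the convolutional stack.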

In recent years, face recognition has made remarkable breakthroughs due to the emergence of deep learning. However, compared with frontal face recognition, many deep face recognition models still suffer serious performance degradation when handling profile faces. To address this issue, we propose a novel Frontal-Centers Guided Loss (FCGFace) to obtain highly discriminative features for face recognition. Most existing discriminative feature learning approaches project features from the same class into a separate latent subspace.

Date: 13 December 2022
Time: 10:00 AM ET (New York Time)
Title: Deep Learning for All-in-Focus Imaging

Date: 6 December 2022
Time: 3:00 PM (Central European Time (CET))
Title: Artificial Intelligence for Applications in Neurology

Brain data are inherently large-scale, multidimensional, and noisy. Indeed, advances in imaging and sensor technology allow recordings of ever-increasing spatio-temporal resolution. The data are multidimensional because time series are recorded at multiple locations (electrodes, voxels), from multiple subjects, and under various conditions.

Date: 14 December 2022
Time: 3:00 PM (Paris Time)
Title: Subgraph-Based Networks for Expressive, Efficient, and Domain-Independent Graph Learning

Date: 21 December 2022
Time: 8:00 AM (PST) | 5:00 PM (CET)
Title: Active Inference

In the cognitive neurosciences and machine learning, we have formal ways of understanding and characterising perception and decision-making; however, the approaches appear very different: current formulations of perceptual synthesis call on theories like predictive coding and the Bayesian brain hypothesis.

While message-passing neural networks (MPNNs) are the most popular architectures for graph learning, their expressive power is inherently limited. In order to gain increased expressive power while retaining efficiency, several recent works apply MPNNs to subgraphs of the original graph. 
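The subgraph idea above can be sketched generically: run the same message-passing layer on a bag of subgraphs of the input graph and pool the results symmetrically. The NumPy example below uses node deletion as the subgraph-selection policy and mean pooling; these choices, and all names in the code, are illustrative assumptions, not the specific architecture presented in the webinar.

```python
import numpy as np

def mpnn_layer(A, X, W):
    """One message-passing step: mean-aggregate neighbor features, then transform."""
    A_hat = A + np.eye(len(A))                     # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)
    H = (A_hat @ X) / deg                          # mean aggregation over neighbors
    return np.maximum(H @ W, 0.0)                  # linear transform + ReLU

def subgraph_readout(A, X, W):
    """Run the same MPNN on every node-deleted subgraph and pool the results.

    Node deletion is one common selection policy; ego networks or edge
    deletion would slot in the same way.
    """
    n = len(A)
    reps = []
    for v in range(n):                             # one subgraph per deleted node
        keep = [u for u in range(n) if u != v]
        A_sub = A[np.ix_(keep, keep)]              # induced subgraph adjacency
        H = mpnn_layer(A_sub, X[keep], W)
        reps.append(H.mean(axis=0))                # per-subgraph graph readout
    return np.stack(reps).mean(axis=0)             # permutation-invariant pooling

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)          # a small toy graph
X = rng.standard_normal((4, 3))                    # node features
W = rng.standard_normal((3, 8))                    # layer weights
print(subgraph_readout(A, X, W).shape)             # (8,): one graph embedding
```

Because every subgraph is processed by the same MPNN and the final pooling is symmetric, the whole pipeline stays permutation-invariant while seeing strictly more structure than a single pass over the original graph.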
