SAM 2010 - Plenary Speakers
University of Illinois, Urbana-Champaign, USA
"Parallel Magnetic Resonance Imaging: A Multi-Channel Signal Processing Perspective"
Magnetic resonance imaging (MRI) is one of the leading diagnostic imaging modalities. While providing excellent spatial resolution and exquisite soft-tissue contrast, MRI suffers from slow acquisition. One highly effective approach developed to address this limitation is parallel imaging with phased-array coils. However, the freedom in acquisition, modeling, coil calibration, and reconstruction is often dealt with in a heuristic way. In this talk we provide a signal processing perspective on these problems, emphasizing the multichannel structure. We show that this perspective yields some interesting variations with improved performance.
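As an illustrative sketch of the multichannel structure emphasized here (a standard SENSE-type formulation; the notation is ours, not taken from the abstract): with C receive coils, coil sensitivity profiles s_c and image rho, each coil records a differently weighted copy of the same object,

```latex
% Multichannel k-space acquisition model for parallel MRI
y_c(\mathbf{k}) \;=\; \int s_c(\mathbf{r})\,\rho(\mathbf{r})\,
   e^{-i 2\pi \mathbf{k}\cdot\mathbf{r}}\, d\mathbf{r} \;+\; n_c(\mathbf{k}),
   \qquad c = 1,\dots,C,
```

with each coil sampling only a subset of k-space; reconstruction then jointly inverts the C coupled channels, which is where the multichannel signal processing viewpoint enters.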
LTCI, TELECOM Paris, France
"A Data Processing Pipeline for the Cosmic Microwave Background"
At the Sun-Earth Lagrange point L2, 1.5 million km from Earth, an array of 63 sensors aboard the Planck satellite is scanning the sky, patiently measuring to unprecedented resolution and sensitivity the micro-Kelvin fluctuations of the Cosmic Microwave Background temperature and polarization. Getting from there to multi-million-pixel spherical maps of the microwave sky in 9 frequency channels, and on to reconstructing the history of our Universe, is a story in technology, cosmology and... challenging signal processing. This talk will highlight some of the key steps of the data processing pipeline being developed for the Planck space mission of ESA.
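One canonical step in such a pipeline is map-making. As a hedged sketch (this is the standard generalized-least-squares formulation, not a claim about Planck's specific implementation): with time-ordered data d, pointing matrix P mapping sky pixels to time samples, sky map s, and noise covariance N,

```latex
\mathbf{d} \;=\; \mathbf{P}\,\mathbf{s} + \mathbf{n},
\qquad
\hat{\mathbf{s}} \;=\;
  \bigl(\mathbf{P}^{\mathsf T}\mathbf{N}^{-1}\mathbf{P}\bigr)^{-1}
  \mathbf{P}^{\mathsf T}\mathbf{N}^{-1}\mathbf{d},
```

where the GLS estimate must be computed for maps with millions of pixels and correlated noise, hence the "challenging signal processing."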
University of Michigan, Ann Arbor, USA
"Performance-Driven Information Fusion"
Information fusion involves combining different information sources using models for the joint source distributions. It is a key component of multichannel sensor processing when there are multiple sensing modalities. Practical information fusion algorithms must approximate information-theoretic quantities such as entropy and mutual information from a finite number of samples from the sensors. Recently we have developed a framework, called performance-driven information fusion, that specifically accounts for the effect of finite-sample estimation errors and bias on the information fusion task. The cornerstone of this framework is a large-sample analysis of bias, variance, and probability distribution that applies to a general class of information divergence measures, including Csiszár's f-divergence, Shannon's mutual information, and Rényi's entropy. Under this framework, information fusion algorithms can be implemented that incorporate error control, and for which one can optimize feature selection and specify optimal tuning parameters such as kernel bandwidth. This talk will introduce this framework and apply it to several applications in multichannel sensor processing.
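For reference, the divergence measures named above are standard quantities (definitions below follow common textbook conventions, not a notation specific to this talk):

```latex
% Csiszár f-divergence between densities p and q (f convex, f(1) = 0)
D_f(p \,\|\, q) \;=\; \int q(x)\, f\!\left(\frac{p(x)}{q(x)}\right) dx

% Shannon mutual information: the f-divergence with f(t) = t \log t,
% applied to the joint density versus the product of marginals
I(X;Y) \;=\; D_{t\log t}\bigl(p_{XY} \,\|\, p_X\, p_Y\bigr)

% Rényi entropy of order \alpha
H_\alpha(p) \;=\; \frac{1}{1-\alpha}\,\log \int p(x)^{\alpha}\, dx,
\qquad \alpha > 0,\ \alpha \neq 1
```

Plug-in estimates of these integrals from finite samples are biased, which is precisely the effect the performance-driven framework accounts for.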
Technion-Israel Institute of Technology, Israel
"An Information Theoretic View of Robust Cooperation/Relaying in Wireless Networks"
In many wireless networks, cooperation, in the form of relaying, takes place over out-of-band spectral resources. Examples are ad hoc networks in which multiple radio interfaces are available for communications, or cellular systems with (wireless or wired) backhaul links. In this overview from an information-theoretic standpoint, we emphasize robust processing and cooperation via out-of-band links for both ad hoc and cellular networks. Specifically, we focus on robust approaches and practical aspects such as imperfect information regarding the channel state and the codebooks (modulation, coding) shared by transmitters and receivers.
First, we address cooperation scenarios with perfect channel state information and investigate the impact of the lack of information regarding the codebooks (oblivious processing) on basic relay channels and on cellular systems with cooperation among base stations. Then, similar models are examined in the absence of perfect channel state information. Robust coding strategies are designed based on 'variable-to-fixed' channel coding concepts (the broadcast coding approach, or unequal error protection codes). The effectiveness of such strategies is discussed for multirelay channels and cellular systems overlaid with femtocell hotspots.
Based on joint studies with E. Erkip, A. Goldsmith, D. Gunduz, H. V. Poor, A. Sanderovich, O. Simeone, O. Somekh.
Alle-Jan van der Veen
TU Delft, The Netherlands
"Calibration Challenges for Large Radio Telescope Arrays"
Radio astronomy is known for its very large telescope dishes, but currently there is a transition towards the use of large numbers of small elements. For example, the recently commissioned LOFAR low-frequency array uses 50 stations, each with some 200 antennas, and the numbers will be even larger for the Square Kilometre Array, planned for 2020. Meanwhile, some of the existing telescope dishes are being retrofitted with focal plane arrays. These instruments pose interesting challenges for array signal processing. One aspect, which we cover in this talk, is the calibration of such large numbers of antennas, especially if they are distributed over a wide area.
Apart from the unknown element gains and phases (which may be directionally dependent), there is the unknown propagation through the ionosphere, which at low frequencies may be diffractive and different over the extent of the array. The talk will discuss several of the challenges, present the underlying data models, and propose some of the answers. We will also touch upon a recent initiative to develop a low-frequency telescope array in space, on a distributed platform formed by a swarm of nanosatellites.
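A minimal sketch of the kind of data model involved (a generic array-calibration formulation; the notation is assumed here, not quoted from the talk): the measured array covariance of P antennas observing Q sources can be written as

```latex
% G = diag(g_1, ..., g_P): unknown complex element gains/phases
% A: array response matrix (ionospheric phase screens fold into A)
\mathbf{R} \;=\;
  \mathbf{G}\,\mathbf{A}\,\boldsymbol{\Sigma}_s\,\mathbf{A}^{\mathsf H}\,
  \mathbf{G}^{\mathsf H} \;+\; \boldsymbol{\Sigma}_n,
```

and calibration amounts to estimating the diagonal gain matrix G (and the direction-dependent perturbations hidden in A) from R, which becomes hard when the array extends beyond a single ionospheric patch.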
Anthony J. Weiss
Tel Aviv University, Israel
"Direct Position Determination and Sparsity in Localization Problems"
The most common methods for locating communications/radar transmitters are based on measuring a specified parameter such as the signal's Angle of Arrival (AOA), Time of Arrival (TOA), Received Signal Strength (RSS) or Differential Doppler (DD). The measured parameters are then used to estimate the transmitter location. Since the AOA/TOA/RSS/DD measurements are made independently, without using the constraint that all measurements must correspond to the same transmitter, the location estimate is suboptimal. Optimal localization is obtained by a single step that uses all the observations together to estimate the emitter position. We refer to single-step localization as Direct Position Determination (DPD). Although this principle has long been known, the signal processing community has long overlooked its potential benefits. In this talk we will compare DPD with two-step algorithms. We will show and explain why, under ideal conditions such as high SNR, DPD is equivalent to two-step algorithms, whereas under low SNR, jamming and other interference, DPD provides better results. Further, we will show that DPD can overcome well-known limitations on the number of sources associated with AOA.
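The contrast between the two approaches can be sketched as follows (a generic formulation with assumed notation, not the talk's own): let r_l be the raw observation at sensor l, Lambda_l a per-sensor fit criterion, and h_l(p) the parameter (AOA, TOA, etc.) that position p would induce at sensor l.

```latex
% Two-step: estimate a parameter per sensor, then fit a position
\hat{\theta}_\ell = \arg\max_{\theta}\, \Lambda_\ell(\mathbf{r}_\ell;\theta),
\qquad
\hat{\mathbf{p}} = \arg\min_{\mathbf{p}} \sum_{\ell}
  \bigl(\hat{\theta}_\ell - h_\ell(\mathbf{p})\bigr)^2

% DPD: one step over all raw observations jointly
\hat{\mathbf{p}} = \arg\max_{\mathbf{p}} \sum_{\ell}
  \Lambda_\ell\bigl(\mathbf{r}_\ell;\, h_\ell(\mathbf{p})\bigr)
```

The DPD criterion enforces, by construction, that all sensors' measurements correspond to the same position p, which is exactly the constraint the two-step approach discards.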
In the second part of the talk we will show how recent developments in sparsity theory can be harnessed to handle outliers in localization measurements. Surprisingly, under known limitations on the number of outliers, we can obtain the exact emitter location. Further, sparsity can also be used to find the location of sources efficiently by linear programming or second-order cone programming.
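A hedged sketch of how sparsity enters (a generic robust-fitting formulation; the notation and the l1 surrogate are assumptions, not necessarily the talk's exact method): model the outliers as a sparse additive vector e on top of the measurement model,

```latex
% z: stacked measurements, h(p): model prediction, e: sparse outliers
\mathbf{z} \;=\; \mathbf{h}(\mathbf{p}) + \mathbf{e} + \mathbf{n},
\qquad \|\mathbf{e}\|_0 \le k

% Convex surrogate: l1-penalized fit, amenable to LP / SOCP solvers
\min_{\mathbf{p},\,\mathbf{e}}\;
  \|\mathbf{z} - \mathbf{h}(\mathbf{p}) - \mathbf{e}\|_2^2
  \;+\; \lambda\,\|\mathbf{e}\|_1
```

Under suitable conditions on the number of outliers k, the l1 penalty recovers the outlier support exactly, so the remaining clean measurements pin down the emitter location.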