Learning the MMSE Channel Estimator


Thursday, 22 July, 2021
By: 
David Neumann, Thomas Wiese, Wolfgang Utschick

Contributed by David Neumann, Thomas Wiese, and Wolfgang Utschick, based on the article "Learning the MMSE Channel Estimator," published in the IEEE Transactions on Signal Processing, vol. 66, pp. 2905-2917, 2018, and on the SPS webinar of the same title, available on the SPS Resource Center.

Accurate channel estimation is a major challenge in the next generation of wireless communication networks. To fully exploit setups with many antennas, estimation errors must be kept small. This can be achieved by exploiting the structure inherent in the channel vectors. For example, line-of-sight paths result in highly correlated channel coefficients. In realistic scenarios, such a correlation structure is present but rather difficult to exploit ad hoc, since not only the channel realization itself but also its statistical properties are random variables that depend, for example, on the positions of the transmitters and receivers in the scenario.
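To illustrate this kind of structure, consider a uniform linear array that receives most of its energy from a single direction. The following minimal numpy sketch (the array size, angle of arrival, and diffuse-power level are all assumptions made for illustration) shows how a line-of-sight geometry concentrates the channel covariance around a single steering vector:

```python
import numpy as np

# Illustrative example: channel covariance of a uniform linear array (ULA)
# with half-wavelength spacing when most energy arrives from one direction.
M = 32                    # number of antennas (assumed)
theta = np.deg2rad(20.0)  # angle of arrival of the line-of-sight path (assumed)

# Steering vector of the ULA for the given angle.
a = np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

# A narrow angular spread concentrates the covariance around a a^H,
# i.e., the channel coefficients across antennas are highly correlated.
sigma2 = 0.01  # small diffuse component (assumed)
C = np.outer(a, a.conj()) + sigma2 * np.eye(M)

# The eigenvalue spectrum reveals the low effective rank.
eigvals = np.linalg.eigvalsh(C)[::-1]
print(eigvals[:3])  # one dominant eigenvalue, the rest near sigma2
```

The single dominant eigenvalue reflects the strong correlation across antennas; it is exactly this kind of low-dimensional structure that a good estimator can exploit.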

Let us assume for a moment that we know the "correct" stochastic channel model for our system, for example conditioned on a particular position in the scenario. We could then calculate the minimum mean squared error (MMSE) channel estimator with respect to that model. This estimator is given by the expected value of the channel vector with respect to the conditional channel distribution. In particular, it is a *function* that maps the training data to the channel estimate, and this function is determined by the channel model and its parameters. For example, with Gaussian channels, the estimator is a linear function of the observations, i.e., a matrix-vector multiplication. The corresponding matrix needs to be calculated only once and depends on the parameters of the distribution, e.g., its covariance, provided we know these parameters of the conditional stochastic channel model. In many practical cases, however, we do not know these conditional models, but only their priors. The channel estimator then becomes a nonlinear function of the observations: the estimation matrix results as an integral over the weighted matrices of the linear case, where the weights are themselves functions of the observations. Such a solution is generally very demanding in terms of numerical complexity and often turns out to be intractable.
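For the conditionally Gaussian case, the linear estimator is easy to write down once the conditional covariance is known. Here is a minimal sketch, assuming the simple pilot model y = h + n with h ~ CN(0, C) and white noise of variance sigma2; the function names are ours, not from [1]:

```python
import numpy as np

def lmmse_matrix(C, sigma2):
    # W = C (C + sigma2 * I)^{-1}: computed once per covariance, then reused.
    M = C.shape[0]
    return C @ np.linalg.inv(C + sigma2 * np.eye(M))

def lmmse_estimate(W, y):
    # With the filter precomputed, each estimate is one matrix-vector product.
    return W @ y
```

As soon as C or sigma2 are themselves uncertain, this single matrix no longer suffices, which is precisely the difficulty described above.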

Instead of calculating the MMSE estimator analytically, we can find it numerically: We first generate a set of training data - channel vectors and observations - from our conditional channel model or from a long-term collection of measurements, and then use an optimization algorithm to compute the function that minimizes the empirical squared estimation error, i.e., the empirical risk. The numerical approach has advantages over the analytical one: In the conditional Gaussian example, we could restrict the search space even further to obtain a matrix with which the matrix-vector multiplication can be expressed as a convolution. The resulting estimator would be optimal in the class of all convolutional estimators of a given dimension, and no closed-form analytical solution is available for it. For non-Gaussian models, it is often not possible to compute the analytical solution at all, and the numerical approach is the only option.
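As a hedged sketch of this idea in the linear case: given training pairs of channel vectors and observations, minimizing the empirical squared error over all linear estimators is an ordinary least-squares problem. The data model and dimensions below are illustrative stand-ins, not the setup of [1]:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 32, 10_000  # antennas, number of training samples (assumed)

# Stand-in training data; in practice these would come from the conditional
# channel model or from a long-term collection of measurements.
H = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
Y = H + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

# Empirical risk minimization over linear estimators: min_W ||H - W Y||_F^2.
# numpy's least-squares solver gives W^H from Y^H W^H ~ H^H.
W = np.linalg.lstsq(Y.conj().T, H.conj().T, rcond=None)[0].conj().T

h_hat = W @ Y[:, 0]  # estimate for the first training observation
```

Restricting W further, e.g., to a convolutional structure, changes the search space but not the principle.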

Function approximation is also the subject of deep learning, and it stands to reason that tools from this field can be used for channel estimation as well. The design of the neural network determines the class of functions that can be used for the estimator. We have shown in [1] that, under simplifying conditions, the MMSE estimator for the conditionally Gaussian channel model closely resembles the architecture of a typical neural network with two layers and a "softmax" activation function. A learning algorithm, i.e., numerical optimization, then reduces the errors introduced by the simplifying assumptions, so that we obtain an estimator that represents a good compromise between numerical and statistical efficiency.
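To make the analogy concrete, the following is a rough sketch of such a two-layer structure with a softmax activation, only loosely following [1]: the first layer computes gating weights from a simple feature of the observation, and the softmax output selects a convex combination of linear filters. The shapes, the feature choice, and all parameter names are our illustrative assumptions:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a vector of scores.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def estimate(y, A1, b1, F):
    # y:  observation vector, shape (M,)
    # A1: gating weights, shape (K, M); b1: gating biases, shape (K,)  (assumed)
    # F:  bank of K candidate estimation matrices, shape (K, M, M)     (assumed)
    x = np.abs(y) ** 2              # simple real-valued feature of the observation
    w = softmax(A1 @ x + b1)        # first layer plus softmax: gating weights
    W = np.tensordot(w, F, axes=1)  # second layer: convex combination of filters
    return W @ y                    # effective linear estimate
```

The learnable parameters, i.e., the gating layer and the filter bank, are then fitted by minimizing the empirical squared error, exactly as described above.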

Using a realistic model as the starting point for the design of a neural network is one way to exploit tools from deep learning without foregoing all domain knowledge. In addition to the network design, we found such knowledge helpful for crafting an initialization strategy for the learning algorithm. More generally, we believe that there are other challenging problems in communications engineering for which deep learning could be a powerful new ingredient.


References:

[1] D. Neumann, T. Wiese, and W. Utschick, "Learning the MMSE channel estimator," IEEE Transactions on Signal Processing, vol. 66, pp. 2905-2917, 2018, doi: 10.1109/TSP.2018.2799164.
