Listening in noisy, reverberant environments can be challenging. The recent emergence of hearable devices, such as smart headphones, smart glasses, and virtual/augmented reality headsets, presents an opportunity for a new class of speech and acoustic signal processing algorithms that use multimodal sensor data to compensate for, or even exploit, changes in head orientation.