SPS SA-TWG Webinar: Realtime Acoustics and EM Propagation in Complex Environments

Date: 18 September 2024
Time: 10:00 AM ET (New York Time)
Speaker(s): Dinesh Manocha

This webinar is the next in a series by the IEEE Synthetic Aperture Technical Working Group (SA-TWG).

Abstract

Ray tracing models have been widely used for acoustic and electromagnetic (EM) simulations. Sound propagation is the process by which sound energy emitted by a source travels through the air as sound waves. In practice, this reduces to computing room impulse responses (RIRs), which are influenced by the positions of the source and listener, the room's geometry, and its materials. Many applications, such as gaming, virtual reality, and computer-aided design, need capabilities for real-time sound propagation. EM simulations have been used to design and deploy conventional radio systems and to predict fields for network planning and localization. Many applications, such as 5G, autonomous vehicles, and traffic systems, need capabilities for dynamic ray tracing: modeling EM wave paths and their interactions with moving objects. This leads to many challenges in complex urban areas due to environmental variability, data scarcity, and computational cost.
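To make the role of the RIR concrete, here is a minimal sketch (an illustration, not the speaker's code): once a simulator has produced an RIR, a dry (anechoic) source signal is "placed in the room" by convolving it with that RIR.

```python
import numpy as np

def auralize(dry_signal: np.ndarray, rir: np.ndarray) -> np.ndarray:
    """Convolve a dry (anechoic) signal with a room impulse response.

    The RIR encodes how the room's geometry, materials, and the
    source/listener positions transform the emitted sound.
    """
    return np.convolve(dry_signal, rir)

# Toy example: a click played through a two-tap "room"
# (direct path at t = 0 plus one echo at 50 ms).
fs = 16000                                  # sample rate in Hz
click = np.zeros(fs // 100)
click[0] = 1.0                              # unit impulse
rir = np.zeros(fs // 10)
rir[0] = 1.0                                # direct path
rir[fs // 20] = 0.5                         # attenuated echo at 50 ms
wet = auralize(click, rir)                  # reverberant output signal
```

Because the input here is an impulse, the output simply reproduces the RIR itself; with real speech or music, the same convolution adds the room's reverberation to the signal.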

We give an overview of our recent work on real-time acoustic and EM propagation in complex, dynamic environments. This includes dynamic coherence-based ray tracing simulations that account for specular, diffuse, and diffraction effects. Our approach improves efficiency by accelerating the recomputation of the bounding volume hierarchy (BVH) and by caching propagation paths. With our formulation, we observe a significant reduction in computation time while maintaining accuracy comparable to that of other simulators. Our approach can also model channel coherence, spatial consistency, and the Doppler effect.
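The BVH update idea can be sketched as follows. This is a hypothetical illustration of the standard "refitting" technique, not the speaker's implementation: when objects move, the tree's topology is reused and only the bounding boxes are recomputed bottom-up, which is far cheaper than rebuilding the hierarchy every frame. Names and the 1-D boxes are simplifications for brevity.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

Box = Tuple[float, float]  # 1-D AABB (min, max); real BVHs use 3-D boxes

@dataclass
class Node:
    box: Box
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    prim: Optional[Box] = None  # a leaf stores its (possibly moving) primitive

def refit(node: Node) -> Box:
    """Recompute AABBs bottom-up after primitives move; topology is reused."""
    if node.prim is not None:            # leaf: adopt the primitive's new box
        node.box = node.prim
    else:                                # internal node: union of child boxes
        lo1, hi1 = refit(node.left)
        lo2, hi2 = refit(node.right)
        node.box = (min(lo1, lo2), max(hi1, hi2))
    return node.box

# Two leaves under one root; then one primitive moves and we refit.
a = Node(box=(0.0, 1.0), prim=(0.0, 1.0))
b = Node(box=(2.0, 3.0), prim=(2.0, 3.0))
root = Node(box=(0.0, 3.0), left=a, right=b)
b.prim = (5.0, 6.0)                      # object b moved
refit(root)                              # root.box now covers the new layout
```

Refitting can degrade tree quality after large motions, which is why practical systems combine it with occasional rebuilds; caching propagation paths plays a complementary role by reusing valid ray paths across frames.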

We also present novel learning-based acoustic and EM simulators that provide one to two orders of magnitude of performance improvement over ray-tracing-based simulators. We leverage a modified conditional Generative Adversarial Network (cGAN) that incorporates encoded geometry and the transmitter location. Using our learning architectures, we demonstrate better efficiency and accuracy of simulations in various indoor environments.
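The conditioning idea behind such a generator can be sketched in a few lines. This is an assumed, much-simplified illustration (layer sizes, names, and the plain MLP are inventions for the example, not the authors' architecture): the generator consumes a noise vector together with a conditioning vector built from an encoded geometry embedding and the transmitter's 3-D location, and emits a fixed-length impulse-response estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

GEOM_DIM, NOISE_DIM, HIDDEN, RIR_LEN = 32, 16, 64, 256

# Randomly initialized weights stand in for a trained generator network.
W1 = rng.standard_normal((GEOM_DIM + 3 + NOISE_DIM, HIDDEN)) * 0.1
W2 = rng.standard_normal((HIDDEN, RIR_LEN)) * 0.1

def generator(geom_embedding: np.ndarray,
              tx_location: np.ndarray,
              z: np.ndarray) -> np.ndarray:
    """One forward pass: condition by concatenation, then a 2-layer MLP."""
    c = np.concatenate([geom_embedding, tx_location, z])  # conditioning + noise
    h = np.tanh(c @ W1)
    return h @ W2  # predicted impulse response of length RIR_LEN

fake_rir = generator(rng.standard_normal(GEOM_DIM),   # encoded room geometry
                     np.array([1.0, 2.0, 0.5]),       # transmitter position
                     rng.standard_normal(NOISE_DIM))  # latent noise vector
```

In a real cGAN, a discriminator receives the same conditioning vector alongside real or generated responses, so that the adversarial loss pushes the generator toward responses that are plausible for that specific room and transmitter.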

We demonstrate the benefits of our improved simulators for various applications. Our generated RIRs outperform interactive ray-tracing simulators in speech-processing applications, including Automatic Speech Recognition (ASR), speech enhancement, speech separation, and interactive walkthroughs. In dynamic urban scenes, we show that our EM simulators scale to large areas and multiple receivers while maintaining accuracy and efficiency relative to prior methods; for complex geometries and indoor environments, we compare their accuracy with analytical solutions and existing EM ray tracing systems.

Biography

Dinesh Manocha holds the Paul Chrisman-Iribe Chair in Computer Science & ECE and is a Distinguished University Professor at the University of Maryland, College Park. His research interests include virtual environments, audio, physically-based modeling, and robotics. His group has developed multiple software packages that have become standards and are licensed to 60+ commercial vendors. He has published more than 790 papers and supervised 50 PhD dissertations. He is a Fellow of AAAI, AAAS, ACM, IEEE, and NAI, a member of the ACM SIGGRAPH and IEEE VR Academies, and a recipient of the Bézier Award from the Solid Modeling Association. He received the Distinguished Alumni Award from IIT Delhi and the Distinguished Career in Computer Science Award from the Washington Academy of Sciences. He was a co-founder of Impulsonic, a developer of physics-based audio simulation technologies, which Valve Inc. acquired in November 2016.