
NEWS AND RESOURCES FOR MEMBERS OF THE IEEE SIGNAL PROCESSING SOCIETY

From Hardware to Intelligence: How AI and Scalable Compute Enable Leaner Instruments

[Image: circuit board with brain]

Interview with Michael S. Hansen (General Manager in Medical Imaging, Health Futures - Microsoft) by Ivan J. Tashev (Editor-in-Chief of the IEEE SPS Industry Signals Newsletter)

Hi Michael, thank you for agreeing to talk with me about how connected instruments mark a shift from hardware-heavy design to intelligence-driven systems, where scalable AI and cloud computing replace complexity with leaner, smarter, and more efficient instrumentation. Can you tell us a little about your background?

Thank you for taking the time to learn about our work. I have had two main phases in my career. 

The first was a more traditional academic path. I completed my PhD in MRI signal processing and image reconstruction, spent some time as a postdoc at UCL in London, and later led a lab at the National Institutes of Health, where I focused on fast MRI techniques. This experience taught me how to produce the best possible images from the least amount of data. 

About nine years ago, I decided to take a break from research and joined Microsoft, where I held roles in the commercial side of the company, working with large customers. I also worked on healthcare data products, and more recently returned to a research-focused role in Health Futures at Microsoft Research. Here, I focus on the tradeoff between expensive instrumentation and compute. This has been a consistent theme throughout my career: identifying computational techniques that can overcome limitations in or corruption of raw data. AI is now providing us with a powerful new set of tools for this.  

 

You often describe imaging as moving from hardware-constrained systems to compute-rich ones—what does an “ideal” connected imaging instrument look like if compute were truly unbounded?

The idea behind connected instruments is to rethink which hardware components are truly necessary when unbounded compute is available. For example, we work on ultra-low-field MRI systems that do not require large superconducting magnets, cryogens, or a highly reliable power supply. We compensate for the lower data quality by using models that can reconstruct and denoise the signal. In essence, an ideal connected instrument is a low-cost system that can be deployed in settings where such technology is currently not feasible.

 

What are the key trade-offs between leaner hardware—such as low-field MRI—and the increased reliance on AI and scalable compute to recover image quality and clinical utility? 

Leaner and lower-cost hardware will depend on access to computational power. To keep costs down, this compute infrastructure should not be deployed with the instrument. For low-cost systems, the duty cycle of the computational hardware is relatively low, so it is more efficient to treat it as a shared resource, for example, in the cloud. For this to work, reliable network connectivity between the instrument and the cloud is required. While this is becoming less of a constraint, important tradeoffs related to bandwidth and security must still be carefully considered.

 

How critical is access to raw imaging data (e.g., k-space in MRI) for unlocking the full potential of AI, and what are the biggest barriers to making such data widely usable across vendors and institutions? 

Access to raw data is often critical. By the time a scanner produces a DICOM image, the data have typically been reduced by orders of magnitude. MRI raw data, for instance, are multi-channel, multi-dimensional, and complex-valued. Accessing raw data is currently a challenge: there are no requirements for instrument manufacturers to provide access to the raw signal. However, emerging community standards (e.g., in MRI and PET imaging) are beginning to address this, and vendors are starting to support them.
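As a rough illustration of the data reduction Hansen describes, the NumPy sketch below builds a synthetic multi-channel, complex-valued k-space array (random values standing in for a real acquisition), reconstructs per-coil images with an inverse FFT, and collapses them into a single real-valued magnitude image via a root-sum-of-squares combine. The coil count and matrix size are arbitrary assumptions, not values from the interview; real raw data also carry calibration and noise scans that this sketch omits.

```python
import numpy as np

# Synthetic multi-channel, complex-valued k-space:
# 8 receive coils, 256x256 matrix (illustrative sizes only).
rng = np.random.default_rng(0)
coils, ny, nx = 8, 256, 256
kspace = rng.standard_normal((coils, ny, nx)) + 1j * rng.standard_normal((coils, ny, nx))

# Per-coil inverse 2D FFT turns k-space samples into coil images.
coil_images = np.fft.ifft2(kspace, axes=(-2, -1))

# Root-sum-of-squares combine collapses the channel dimension into one
# real-valued magnitude image -- roughly the kind of output a DICOM stores.
magnitude = np.sqrt(np.sum(np.abs(coil_images) ** 2, axis=0))

# Compare scalar counts: complex multi-channel raw data vs. one real image.
raw_values = kspace.size * 2        # real + imaginary parts per sample
image_values = magnitude.size
print(raw_values // image_values)   # prints 16: raw holds 16x more values here
```

Even in this toy setup, discarding phase and coil information shrinks the data sixteenfold; with multi-dimensional acquisitions (slices, time frames, contrasts), the reduction from raw signal to rendered image is far larger.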

 

As image reconstruction, analysis, and even diagnosis become part of an end-to-end AI pipeline, how do you see the boundaries shifting between acquisition, reconstruction, and clinical decision-making? 

The boundaries between image reconstruction and image interpretation are increasingly blurring. As mentioned earlier, raw data provide a richer representation of the experiment, whereas images are primarily designed for a human interface—our eyes. It is therefore likely that, as AI becomes more involved in extracting clinical information, it will operate directly on raw data and bypass image formation altogether. However, for the foreseeable future, we will continue to generate images to allow humans to understand and interpret the findings.

 

What are the main challenges in bringing cloud-enabled imaging—from research prototypes to clinical deployment, particularly in terms of validation, workflow integration, and trust from clinicians?

In terms of image production, the main challenge is ensuring that the images generated are reliable and trustworthy. We need better methods to quantify uncertainty in pixel values. When interpreting images, it is essential to ensure that findings are properly grounded in the data and clearly communicated to clinicians. Finally, when using remote computational resources—which are critical for providing the required computational power—we must adopt established patterns that ensure security and compliance, meeting the needs of healthcare systems while protecting patient data.

Michael, on behalf of the readers of the IEEE SPS Industry Signals Newsletter, I would like to thank you for taking the time to answer our questions.

The pleasure was mine, Ivan.

 

[Image: Michael S. Hansen]

Michael S. Hansen is a General Manager in Medical Imaging, Health Futures - Microsoft, where he focuses on applying machine learning and scalable cloud computing to process raw instrument signals, particularly in computational biomedical imaging such as MRI and CT. His work bridges hardware and intelligence by developing methods that enable advanced imaging capabilities through software and large-scale compute rather than increasingly complex instruments. With a PhD in biomedical engineering and prior academic and research roles, including at the National Heart, Lung, and Blood Institute, he has contributed extensively to image reconstruction, data standards, and AI-driven imaging pipelines, helping shape a new generation of connected, compute-enabled medical instruments.

 

 

[Image: Ivan J. Tashev]

Ivan J. Tashev is a Partner Software Architect at Microsoft Research in Redmond, where he leads the Audio and Acoustics Research Group and works at the intersection of signal processing, artificial intelligence, and human–computer interaction. His research focuses on audio signal processing, spatial audio, speech enhancement, and brain–computer interfaces, contributing to both scientific advances and real-world systems. Over his career, he has played a key role in advancing microphone array technology and speech processing, earning recognition as an IEEE Fellow and receiving the IEEE Signal Processing Society Industrial Innovation Award. He is also an active member of the research community, contributing to publications, conferences, and industry initiatives while helping translate cutting-edge research into impactful technologies.