In this letter, we propose a novel solution to the problem of single image super-resolution at multiple scaling factors with a single network architecture. In applications where only a detail needs to be super-resolved, traditional solutions must either take as input the low-resolution detail alone, thus losing the information about its context, or process the whole low-resolution image and then crop the desired output detail, which is quite wasteful in terms of computation and storage. To address both issues we propose ZoomGAN, a model that takes as input the whole low-resolution image, which we call the context, together with a binary mask whose box specifies which detail of the low-resolution image to magnify. The output of ZoomGAN has the same size as the inputs, so the scaling factor is implicitly defined by the arbitrary size of the mask box. To encourage realistic, high-quality output, we combine adversarial training with a perceptual loss. We use two discriminators: one promotes the similarity between the distributions of real and generated details, and the other promotes the similarity between the distributions of real and generated (detail, context) pairs. We evaluate ZoomGAN in several experiments on multiple datasets and show that it achieves state-of-the-art performance on zoomed-in details in terms of the LPIPS and PI perceptual metrics, while being on par in terms of the PSNR distortion metric. The code will be available at https://github.com/Andyzhang59.
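The input construction described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `make_zoom_input` and the `(top, left, height, width)` box convention are assumptions; the letter only specifies that the network receives the full low-resolution context plus a binary mask whose box size implicitly defines the scaling factor.

```python
import numpy as np

def make_zoom_input(lr_image, box):
    """Stack the full LR context with a binary mask marking the
    detail box (illustrative sketch of the input described in the
    letter). box = (top, left, height, width)."""
    h, w = lr_image.shape[:2]
    mask = np.zeros((h, w), dtype=lr_image.dtype)
    top, left, bh, bw = box
    mask[top:top + bh, left:left + bw] = 1.0
    # The output has the same size as the input, so a smaller box
    # implies a larger magnification of the selected detail.
    scale = min(h / bh, w / bw)
    # Concatenate the mask as an extra channel of the network input.
    return np.concatenate([lr_image, mask[..., None]], axis=-1), scale

lr = np.random.rand(64, 64, 3)          # toy LR context
x, scale = make_zoom_input(lr, (16, 16, 16, 16))
# x has shape (64, 64, 4); a 16x16 box in a 64x64 image gives scale 4
```

Because the scaling factor is derived from the box size rather than fixed in the architecture, the same network can serve arbitrary zoom factors.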
Single image super-resolution (SISR) is the problem of reconstructing a high-resolution (HR) image from a low-resolution (LR) one. Although SISR is an ill-posed inverse problem, as multiple HR reconstructions yield the same LR image, deep learning models have demonstrated the capability to identify likely HR reconstructions by capturing detailed prior information about natural images. These methods have shown remarkable performance by exploiting convolutional neural networks [1]–[9] and generative adversarial training [4], [10]–[13].
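The ill-posedness is easy to demonstrate with a toy degradation operator. The block-average downsampler below is an assumption for illustration only (the letter does not commit to a specific degradation model): two distinct HR images with the same block means map to the identical LR observation, so the inverse problem has no unique solution.

```python
import numpy as np

def downsample(hr, factor=2):
    """Block-average downsampling: a simple stand-in for the LR
    degradation operator (illustrative, not the paper's model)."""
    h, w = hr.shape
    return hr.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Two different HR images...
hr_a = np.array([[1.0, 0.0],
                 [0.0, 1.0]])
hr_b = np.array([[0.0, 1.0],
                 [1.0, 0.0]])

# ...yield the same LR image: both 2x2 blocks average to 0.5.
assert np.allclose(downsample(hr_a), downsample(hr_b))
```

A learned prior over natural images is what lets a network prefer one plausible HR reconstruction among the many consistent with the LR input.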