Deep Learning for Camera Autofocus

By: Chengyu Wang; Qian Huang; Ming Cheng; Zhan Ma; David J. Brady

Most digital cameras use specialized autofocus sensors, such as phase detection, lidar, or ultrasound, to directly measure focus state. However, such sensors increase cost and complexity without directly optimizing final image quality. This paper proposes a new pipeline for image-based autofocus and shows that neural image analysis finds focus 5-10x faster than traditional contrast enhancement. We achieve this by learning the direct mapping between an image and its focus position. In further contrast with conventional methods, AI methods can generate scene-based focus trajectories that optimize synthesized image quality for dynamic and three-dimensional scenes. We propose a focus control strategy that varies focal position dynamically to maximize image quality as estimated from the focal stack, and we present a rule-based agent and a learned agent for different scenarios, showing their advantages over other focus stacking methods.
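To make the core idea concrete, the following is a minimal, hypothetical PyTorch sketch of the kind of image-to-focus mapping described above: a small convolutional network that regresses an in-focus lens position directly from a single defocused image patch, so that one forward pass can replace the iterative search of contrast-based autofocus. The FocusNet name, layer sizes, and training snippet are illustrative assumptions, not the authors' published architecture.

# Minimal sketch (assumed architecture, not the paper's): a small CNN that
# regresses a normalized in-focus lens position from one defocused patch.
import torch
import torch.nn as nn

class FocusNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global pooling -> (B, 128, 1, 1)
        )
        self.regressor = nn.Linear(128, 1)    # scalar lens position estimate

    def forward(self, x):
        f = self.features(x).flatten(1)
        return self.regressor(f)

# One supervised training step: each patch is labeled with the lens position
# at which its scene content is in focus (data loading omitted here).
model = FocusNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

patches = torch.randn(8, 1, 128, 128)   # stand-in defocused patches
true_positions = torch.rand(8, 1)        # normalized lens positions in [0, 1]

optimizer.zero_grad()
pred = model(patches)
loss = loss_fn(pred, true_positions)
loss.backward()
optimizer.step()

Under this formulation, a trained network estimates where to drive the lens from a single observation rather than climbing a contrast curve step by step, which is the kind of single-shot inference that could account for the reported speedup over contrast-based search.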
