Multiresolution Signal Processing In Digital Cinema

Wednesday, 23 May, 2018
By: Daryoush H. Razi, PhD

This past Star Wars Day (May the Fourth), fans had reason to celebrate: the new prequel, Solo: A Star Wars Story, was only weeks away from release, and more broadly, the entire franchise is undergoing a renaissance, with new films more faithful in style and storyline to the beloved original trilogy. But beyond the actors and writers, Star Wars fans should be thanking the technology that makes such films possible: signal processing. Digital cinema, of which the Star Wars films are prime examples, is a technological innovation enabling a new generation of breathtaking epics that sear themselves into our personal and collective memories. But to capture digital images, motion picture producers must use several different signal processing technologies, including multiresolution image processing, visual effects creation, editing and color calibration. Few realize how complicated it is to pull all these technologies together to create art.

A limited number of companies offer 8K digital cinema cameras, with resolutions that exceed the flexibility and detail offered by traditional film: roughly 17 times the pixel count of HD and more than four times that of 4K cameras. Camera sensors that convert photons to electrons at full 8K resolution (8192 × 4320 pixels) are also available. These devices allow raw image capture with tunable file sizes for superior image quality, and enable a non-destructive raw workflow in which edits travel with the footage as metadata.
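
For readers who want to check the arithmetic, here is a quick back-of-the-envelope comparison in Python, using the standard HD, DCI 4K and 8K frame sizes cited above:

```python
# Pixel-count comparison behind the "17x HD" and "4x 4K" figures above.
hd = 1920 * 1080   # HD: ~2.07 megapixels
k4 = 4096 * 2160   # DCI 4K: ~8.85 megapixels
k8 = 8192 * 4320   # 8K full format: ~35.4 megapixels

print(f"8K vs HD: {k8 / hd:.1f}x")  # ~17.1x
print(f"8K vs 4K: {k8 / k4:.1f}x")  # 4.0x
```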

RED Digital Cinema, as well as other companies, offers an enhanced image processing pipeline that runs in-camera as firmware and can be updated as new multiresolution signal and image processing operators are introduced, bringing enhanced and improved algorithms for delivering raw image data. With it, creative filmmakers can capture the most vivid colors and the highest levels of dynamic range, and achieve the best picture quality.
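
To illustrate the kind of multiresolution operator such a pipeline might apply, here is a minimal sketch of a Laplacian pyramid decomposition using OpenCV and NumPy. The actual in-camera algorithms are proprietary, and the frame below is synthetic:

```python
# A minimal sketch of a multiresolution decomposition (Laplacian pyramid),
# a standard building block of image processing pipelines. Illustrative
# only; real camera firmware uses proprietary operators.
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    """Split an image into band-pass detail layers plus a low-res residual."""
    layers = []
    current = img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(current)  # halve the resolution
        up = cv2.pyrUp(down, dstsize=(current.shape[1], current.shape[0]))
        layers.append(current - up)  # detail lost at this scale
        current = down
    layers.append(current)           # low-frequency residual
    return layers

frame = np.random.rand(432, 768, 3).astype(np.float32)  # stand-in for a frame
print([layer.shape for layer in laplacian_pyramid(frame)])
```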

In addition to a film camera, a production might have three or four different digital cameras in use, each recording to its own device's data format. Camera outputs are delivered without color correction, so the resulting picture looks somewhat washed out and requires a color-correction system. Professional digital cameras record images not only in proprietary raw file formats (with camera-specific algorithms), but also with imaging characteristics and colorimetry designed around each sensor.
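
As a toy example of what a first color-correction pass might do, here is one common lift/gamma/gain formulation in Python. Both the formula variant and the values are illustrative, not any vendor's actual transform:

```python
# A minimal sketch of a primary color correction (lift/gamma/gain).
# One common formulation; illustrative values, not a vendor transform.
import numpy as np

def primary_correct(rgb, lift=0.0, gamma=1.0, gain=1.0):
    """Raise/lower blacks (lift), scale highlights (gain), shape midtones (gamma)."""
    out = gain * (rgb + lift * (1.0 - rgb))
    return np.clip(out, 0.0, 1.0) ** (1.0 / gamma)

flat = 0.2 + 0.6 * np.random.rand(4, 4, 3)   # stand-in for washed-out footage
graded = primary_correct(flat, lift=-0.05, gamma=0.9, gain=1.1)
```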

Each technology produces output in its own raw format, which is why a universal format is needed in which the film can be preserved, so that it can be continually remastered for future display devices as multiresolution technology evolves.

The Academy of Motion Picture Arts and Sciences has released a software technology, the Academy Color Encoding System (ACES) [1], which is becoming the industry standard for managing color throughout the life cycle of a motion picture, including archiving and future remastering. ACES ensures a consistent color experience that preserves the filmmaker's creative vision. It is a free, open, device-independent color management and image interchange system that can be applied to almost any current or future workflow.

ACES provides digital image encoding and other specifications that preserve the latitude and color range of the original imagery, allowing the highest-quality images possible from the cameras and processes used. It establishes a common standard so deliverables can be efficiently created and preserved.

The Scene Referred stage of ACES begins by importing the device's output, together with the specifics of the capture medium, to get as close as possible to the original light of the scene exposure. Each device manufacturer makes this possible by writing an ACES transform specification for its camera; the operation is referred to as the Input Device Transform (IDT).
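
Conceptually, an IDT for a camera with linear output reduces to a matrix transform from camera-native RGB into the ACES (AP0) primaries. The sketch below uses a made-up matrix purely for illustration; real IDTs are published per camera model by the manufacturer:

```python
# A minimal sketch of an Input Device Transform (IDT) as a 3x3 matrix
# mapping linear camera-native RGB into ACES (AP0). The matrix is
# hypothetical; real IDTs come from the camera manufacturer.
import numpy as np

CAMERA_TO_ACES = np.array([   # illustrative only (rows sum to 1 to keep white)
    [0.72, 0.15, 0.13],
    [0.10, 0.84, 0.06],
    [0.02, 0.08, 0.90],
])

def input_device_transform(camera_rgb):
    """Apply the IDT matrix to an (N, 3) array of linear camera RGB values."""
    return camera_rgb @ CAMERA_TO_ACES.T

aces_rgb = input_device_transform(np.random.rand(8, 3))
```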

ACES includes OpenEXR, a deep-raster file format used in computer-generated imagery (CGI), visual effects (VFX) and animation. Since it can store many different channels in one file, it removes the need to store that information in separate files. To create a future-proof "Digital Source Master," ACES expresses its transforms in a portable programming language, the Color Transform Language (CTL).
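
To make the "many channels in one file" point concrete, here is a minimal sketch using the Python OpenEXR bindings to write RGB plus an alpha channel into a single half-float EXR. The channel names and pixel data are illustrative:

```python
# A minimal sketch of writing a multi-channel OpenEXR file (RGB + alpha
# in one file) with the Python "OpenEXR" bindings. Illustrative data.
import numpy as np
import OpenEXR
import Imath

w, h = 640, 360
half = Imath.Channel(Imath.PixelType(Imath.PixelType.HALF))

header = OpenEXR.Header(w, h)
header['channels'] = {name: half for name in ('R', 'G', 'B', 'A')}

pixels = {name: np.random.rand(h, w).astype(np.float16).tobytes()
          for name in ('R', 'G', 'B', 'A')}

exr = OpenEXR.OutputFile('layers.exr', header)
exr.writePixels(pixels)
exr.close()
```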

ACES provides support for many channels, a color gamut wider than the entire range visible to the human eye, 16 bits per color channel (half-precision floating point) and the capacity to hold a very wide dynamic range. For each output device there is a specific Output Device Transform (ODT), so the rest of the workflow doesn't have to change.
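
A quick way to see why 16-bit half-precision floats can hold such a wide dynamic range is to inspect the format's limits with NumPy:

```python
# Half-precision (16-bit float) limits: the headroom behind ACES/OpenEXR HDR.
import numpy as np

info = np.finfo(np.float16)
print(info.max)    # 65504.0: far above diffuse white
print(info.tiny)   # ~6.1e-05: smallest normal value, deep shadow detail
print(np.log2(float(info.max) / float(info.tiny)))  # ~30 stops between them
```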

The next step is to perform creative editing and import VFX, CGI and animation layers into ACES via their Input Device Transforms.

Color grading is the process of enhancing the color of a motion picture; it includes both color correction and the generation of artistic color effects. Image data acquired by the camera sensors must be transformed from their native values into ACES pixel values, and because the scene-referred encoding differs from the conditions under which the image will finally be viewed, a viewing transform is required. The ACES Viewing Transform stage begins with the Look Modification Transform, which creates "the look" from the results of the processes above.
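
One standard building block for a "look" is the ASC Color Decision List (CDL): a per-channel slope/offset/power adjustment plus saturation. The sketch below applies it in Python with illustrative values; the CDL is one common carrier for look adjustments, not the whole Look Modification Transform mechanism:

```python
# A minimal sketch of an ASC CDL-style look: per-channel slope/offset/power,
# followed by a saturation adjustment around Rec.709 luma. Values illustrative.
import numpy as np

REC709_LUMA = np.array([0.2126, 0.7152, 0.0722])

def asc_cdl(rgb, slope=1.0, offset=0.0, power=1.0, sat=1.0):
    """out = clip(in * slope + offset)^power, then mix toward/away from luma."""
    out = np.clip(rgb * slope + offset, 0.0, None) ** power
    luma = (out * REC709_LUMA).sum(axis=-1, keepdims=True)
    return luma + sat * (out - luma)

shot = np.random.rand(16, 3)                     # scene-referred RGB samples
looked = asc_cdl(shot, slope=1.05, offset=-0.02, power=1.1, sat=0.9)
```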

Each stage may be at a different resolution, so multiresolution compositing of all input layers begins the stage containing the ACES Reference Rendering Transform (RRT), in which values begin their transformation to a display output format with significantly higher-quality images. The RRT is intended to be the universal standard for transforming ACES images from their Scene Referred values to high-dynamic-range Output Referred values. The rendered images are not optimized for any one output format, and so require an Output Device Transform; each output device manufacturer provides one, as an ACES specification, at the very end of the chain.
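
To illustrate the multiresolution compositing step, here is a minimal Laplacian-pyramid blend of two layers through a soft mask, the classic way to composite so that seams stay invisible at every spatial frequency. It shows the idea only, not the actual ACES rendering chain:

```python
# A minimal sketch of multiresolution compositing: a Laplacian-pyramid
# blend of two layers through a mask. Illustrative, not the ACES chain.
import cv2
import numpy as np

def pyramid_blend(a, b, mask, levels=4):
    """Blend images a and b through mask, level by level, to hide seams."""
    if levels == 0:
        return a * mask + b * (1.0 - mask)
    size = (a.shape[1], a.shape[0])
    a_dn, b_dn, m_dn = cv2.pyrDown(a), cv2.pyrDown(b), cv2.pyrDown(mask)
    a_det = a - cv2.pyrUp(a_dn, dstsize=size)   # detail band of each layer
    b_det = b - cv2.pyrUp(b_dn, dstsize=size)
    low = pyramid_blend(a_dn, b_dn, m_dn, levels - 1)
    return cv2.pyrUp(low, dstsize=size) + a_det * mask + b_det * (1.0 - mask)

h, w = 256, 256
cgi = np.random.rand(h, w, 3).astype(np.float32)    # stand-in CGI layer
plate = np.random.rand(h, w, 3).astype(np.float32)  # stand-in live-action plate
mask = np.zeros((h, w, 3), np.float32)
mask[:, : w // 2] = 1.0                             # left half from the CGI layer
composite = pyramid_blend(cgi, plate, mask)
```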

So whether you’re a Star Wars fan or your genre is something totally different, chances are your favorite film is made possible by signal processing. Next time you’re watching your favorite movie, make sure to thank the signal processing engineers who made it all happen. Or maybe even consider exploring this valued and exciting career path.

References

[1] Academy of Motion Picture Arts and Sciences, "Academy Color Encoding System (ACES)," http://www.oscars.org/science-technology/sci-tech-projects/aces
