Transfer your Font Style Using Multi-Content GAN

Inside Signal Processing Newsletter

Researchers from UC Berkeley have developed a deep learning model, the multi-content generative adversarial network (MC-GAN), that can automatically transfer a font's style from a handful of observed letters to the rest of the alphabet. It stacks a conditional generative adversarial network (cGAN) that predicts the coarse glyph shapes with an ornamentation network that predicts the color and texture of the final glyphs. Given a movie poster, for example, it can generate a new title in the same style.

[Image courtesy of the researchers]
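The two-stage data flow described above can be sketched at the level of array shapes. This is a minimal illustration, not the authors' implementation: both `glyph_network` and `ornamentation_network` below are placeholder functions standing in for the trained stacked-cGAN generator and ornamentation network, and the image sizes are assumptions for the sketch.

```python
import numpy as np

# Assumed shapes for illustration: 26 glyphs per font, 64x64 glyph masks.
NUM_GLYPHS, H, W = 26, 64, 64

def glyph_network(observed):
    """Stand-in for stage 1 (the stacked cGAN generator): predict coarse
    grayscale shapes for all 26 glyphs from the few observed ones.
    Here we simply broadcast the mean of the observed glyphs."""
    mean_shape = observed.mean(axis=0, keepdims=True)  # (1, H, W)
    return np.repeat(mean_shape, NUM_GLYPHS, axis=0)   # (26, H, W)

def ornamentation_network(shapes):
    """Stand-in for stage 2 (the ornamentation network): add color and
    texture to the predicted shapes, producing RGB glyphs."""
    r, g, b = shapes, shapes * 0.5, shapes * 0.2       # toy colorization
    return np.stack([r, g, b], axis=-1)                # (26, H, W, 3)

# A few observed glyphs, e.g. the letters appearing on a movie poster.
observed = np.random.rand(5, H, W)
final_glyphs = ornamentation_network(glyph_network(observed))
print(final_glyphs.shape)  # (26, 64, 64, 3)
```

The point is the pipeline shape: a small set of observed glyphs goes in, coarse shapes for the full alphabet come out of the first network, and the second network turns those shapes into fully ornamented RGB glyphs.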

This is not the first application of deep learning to style transfer. Convolutional neural networks (CNNs), for example, have previously been used for image style transfer (Gatys et al., CVPR 2016: https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Gatys_Image_Style_Transfer_CVPR_2016_paper.pdf).
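In the CNN approach of Gatys et al., an image's style is represented by Gram matrices of its convolutional feature maps, i.e. the correlations between feature channels. A minimal sketch of that computation (the feature map here is random data, standing in for the output of a pretrained CNN layer):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a feature map with shape (C, H, W): the C x C
    matrix of channel correlations used as the style representation."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)   # flatten spatial dimensions
    return f @ f.T / (h * w)         # normalize by spatial size

feats = np.random.rand(8, 16, 16)    # stand-in for a CNN feature map
print(gram_matrix(feats).shape)      # (8, 8)
```

Style transfer then optimizes a target image so that its Gram matrices match those of the style image while its raw feature maps stay close to those of the content image.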

For more details about MC-GAN, see the BAIR blog post at http://bair.berkeley.edu/blog/2018/03/13/mcgan/.
