10 years of news and resources for members of the IEEE Signal Processing Society
For our May 2017 issue, we cover recent patents dealing with signal processing applications of neural networks.

Patent no. 9,627,532 presents methods and apparatus for training a multi-layer artificial neural network for use in speech recognition. For a first speech pattern of a plurality of speech patterns, a first processing pipeline determines network activations for a plurality of nodes of the network in response to providing the speech pattern as input. Based, at least in part, on the network activations and a selection criterion, the method determines whether the network should be trained on that speech pattern. When it should, a second processing pipeline updates the network weights between nodes based, at least in part, on the network activations.

A system for prospectively identifying media characteristics for inclusion in media content is disclosed in patent no. 9,619,747. A neural network database including media characteristic information and feature information may associate relationships among them. Personal characteristic information associated with target media consumers may be used to select a subset of the database. A first set of nodes, representing selected feature information, is activated; node interactions are then calculated to detect the activation of a second set of nodes, which represent media characteristic information. Generally, a node is activated when its activation value exceeds a threshold value. Media characteristic information may then be identified for inclusion in media content based on the second set of nodes.

Neural network image curation techniques are described in patent no. 9,613,058.
In one or more implementations, the curation of images in a repository is controlled: a plurality of images is curated by one or more computing devices to select representative images of the repository. The curation includes calculating a score for each image, based jointly on image and face aesthetics, through processing by a neural network; ranking the images by their scores; and selecting images as representatives based on the ranking and a determination that they are not visually similar to images already selected as representative.

As presented in patent no. 9,607,616, a spoken language understanding (SLU) system receives a sequence of words corresponding to one or more spoken utterances of a user, which is passed through a spoken language understanding module to produce a sequence of intentions. The word sequence is passed through a first subnetwork of a multi-scale recurrent neural network (MSRNN), and the intention sequence through a second subnetwork; the outputs of the two subnetworks are then combined to predict the goal of the user.

Techniques for implementing neural networks in speech recognition systems are discussed in patent no. 9,520,128. Such techniques may include frame skipping with approximated skip frames and/or distances on demand, so that only those outputs needed by a speech decoder are produced by the neural network or by approximation techniques.

In patent no. 9,501,724, a convolutional neural network (CNN) is trained for font recognition and font similarity learning.
In a training phase, text images with font labels are synthesized, introducing variances to minimize the gap between the training images and real-world text images. The generated training images are input to the CNN, whose output is fed into an N-way softmax function, N being the number of fonts the CNN is trained on, producing a distribution of classified text images over the N class labels. In a testing phase, each test image is normalized in height and squeezed in aspect ratio, yielding a plurality of test patches; the CNN averages the probabilities of each test patch belonging to a set of fonts to obtain a classification. Extracted feature representations may be used to define similarity between fonts, which is useful in font suggestion, font browsing, or font recognition applications.

In patent no. 9,456,174, an automated video editing system uses user inputs and metadata, combined with machine learning technology, to gradually improve its editing techniques as more footage is edited. The system is designed to work primarily with a network of automated video recording systems that use cooperative tracking methods; it is also designed to improve the tracking algorithms used in cooperative tracking and to enable systems to adopt image-recognition-based tracking once the results of machine learning are available.

Patent no. 9,430,829 presents an example apparatus for detecting mitosis in breast cancer pathology images by combining handcrafted (HC) and convolutional neural network (CNN) features in a cascaded architecture. The apparatus includes a set of logics that acquires an image of a region of tissue, partitions the image into candidate patches, generates a first probability that a patch is mitotic using an HC feature set and a second probability using a CNN-learned feature set, and classifies the patch based on the first probability and the second probability.
If the first and second probabilities do not agree, the apparatus trains a cascaded classifier on the CNN-learned feature set and the HC feature set, generates a third probability that the patch is mitotic, and classifies the patch based on a weighted average of the first, second, and third probabilities.

If you have interesting patents related to neural networks or any other aspect of signal processing that can be shared with our community, or if you are especially interested in signal processing and would like to be highlighted in this section, please email Csaba Benedek (benedek.csaba AT sztaki DOT mta DOT hu).

References

Number: 9,627,532
Title: Methods and apparatus for training an artificial neural network for use in speech recognition
Inventors: Gemello, Roberto (Alpignano, IT); Mana, Franco (Turin, IT); Albesano, Dario (Venaria Reale, IT)
Issued: April 18, 2017
Assignee: Nuance Communications, Inc. (Burlington, MA)

Number: 9,619,747
Title: Prospective media content generation using neural network modeling
Inventors: Bhatt, Meghana (Aliso Viejo, CA); Payne, Rachel (Aliso Viejo, CA)
Issued: April 11, 2017
Assignee: Fem, Inc. (Aliso Viejo, CA)

Number: 9,613,058
Title: Neural network image curation control
Inventors: Shen, Xiaohui (San Jose, CA); Lu, Xin (State College, PA); Lin, Zhe (Fremont, CA); Mech, Radomir (Mountain View, CA)
Issued: April 4, 2017
Assignee: Adobe Systems Incorporated (San Jose, CA)

Number: 9,607,616
Title: Method for using a multi-scale recurrent neural network with pretraining for spoken language understanding tasks
Inventors: Watanabe, Shinji (Arlington, MA); Luan, Yi (Seattle, WA); Harsham, Bret (Newton, MA)
Issued: March 28, 2017
Assignee: Mitsubishi Electric Research Laboratories, Inc.
(Cambridge, MA)

Number: 9,520,128
Title: Frame skipping with extrapolation and outputs on demand neural network for automatic speech recognition
Inventors: Bauer, Josef (Munich, DE); Rozen, Piotr (Gdansk, PL); Stemmer, Georg (Munich, DE)
Issued: December 13, 2016
Assignee: Intel Corporation (Santa Clara, CA)

Number: 9,501,724
Title: Font recognition and font similarity learning using a deep neural network
Inventors: Yang, Jianchao (San Jose, CA); Wang, Zhangyang (Urbana, IL); Brandt, Jonathan (Santa Cruz, CA); Jin, Hailin (San Jose, CA); Shechtman, Elya (Seattle, WA); Agarwala, Aseem Omprakash (Seattle, WA)
Issued: November 22, 2016
Assignee: Adobe Systems Incorporated (San Jose, CA)

Number: 9,456,174
Title: Neural network for video editing
Inventors: Boyle, Christopher T. (San Antonio, TX); Sammons, Alexander G. (San Antonio, TX); Taylor, Scott K. (San Antonio, TX)
Issued: September 27, 2016
Assignee: H4 Engineering, Inc. (San Antonio, TX)

Number: 9,430,829
Title: Automatic detection of mitosis using handcrafted and convolutional neural network features
Inventors: Madabhushi, Anant (Beachwood, OH); Wang, Haibo (Cleveland Heights, OH); Cruz-Roa, Angel (Bogota, CO); Gonzalez, Fabio (Bogota, CO)
Issued: August 30, 2016
Assignee: Case Western Reserve University (Cleveland, OH)
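As an illustration, the threshold-based node activation described for patent no. 9,619,747 can be sketched in a few lines of Python. The flat edge dictionary, the example weights, and the threshold value below are illustrative assumptions, not details from the patent:

```python
def activate_nodes(weights, first_set, threshold=0.5):
    """Spread activation from a first set of (feature) nodes and return the
    second set of (media-characteristic) nodes whose accumulated activation
    exceeds the threshold.

    `weights` maps (feature_node, characteristic_node) edge tuples to edge
    weights; this flat-dictionary graph layout is a simplifying assumption.
    """
    activation = {}
    for (src, dst), w in weights.items():
        if src in first_set:
            # accumulate weighted input at each downstream node
            activation[dst] = activation.get(dst, 0.0) + w
    # a node is activated when its activation value exceeds the threshold
    return {node for node, value in activation.items() if value > threshold}
```

For example, with hypothetical edges {("bright", "comedy"): 0.4, ("fast_cuts", "comedy"): 0.3}, activating both feature nodes pushes the "comedy" characteristic node past the default threshold, while activating only one does not.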
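Similarly, the cascaded classification of patent no. 9,430,829 combines an HC-feature probability with a CNN-feature probability, consulting a third probability from a cascaded classifier only when the first two disagree. A minimal sketch, in which the decision threshold and the averaging weights are assumed values rather than figures from the patent:

```python
def classify_patch(p_hc, p_cnn, p_cascade=None, threshold=0.5,
                   weights=(0.4, 0.4, 0.2)):
    """Return True if a candidate patch is classified as mitotic.

    p_hc: probability from the handcrafted (HC) feature classifier.
    p_cnn: probability from the CNN-learned feature classifier.
    p_cascade: probability from the cascaded classifier, used only when
    the first two disagree. Threshold and weights are assumptions.
    """
    if (p_hc >= threshold) == (p_cnn >= threshold):
        # the two classifiers agree: average their probabilities
        score = (p_hc + p_cnn) / 2.0
    else:
        # disagreement: weighted average of all three probabilities
        w_hc, w_cnn, w_cas = weights
        score = w_hc * p_hc + w_cnn * p_cnn + w_cas * p_cascade
    return score >= threshold
```

With the assumed weights, two agreeing classifiers decide the label on their own, while a disagreement lets the cascaded classifier's probability tip the weighted average either way.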