Recent Patents in Signal Processing (January 2017) – Emotion Recognition

For our January 2017 issue, we cover recent patents granted in the area of emotion recognition. The patents below deal with hardware and software components of state-of-the-art emotion analysis systems and with various applications of the latest technological developments.

Patent no. 9,508,008 introduces a see-through, head-mounted display together with sensing devices that cooperate with the display to detect audible and visual behaviors of a subject in the device's field of view. A processing device communicating with the display and the sensors monitors the subject's audible and visual behaviors by receiving data from the sensors. Emotional states are computed from these behaviors, and feedback is provided to the wearer indicating the computed emotional states of the subject. During interactions, the device recognizes emotional states in subjects by comparing detected sensor input against a database of human/primate gestures/expressions, posture, and speech; feedback is provided to the wearer after the sensor input has been interpreted.
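
As a rough illustration of the matching step, the sketch below classifies a fused audio/visual feature vector by nearest-neighbor comparison against a small reference database of expressions. The database entries, feature dimensions, and labels are hypothetical stand-ins; the patent does not disclose this particular representation.

```python
import numpy as np

# Hypothetical reference database: a fused feature vector per known
# gesture/expression pattern, paired with an emotional-state label.
EXPRESSION_DB = {
    "smile, relaxed posture":            ("happy", np.array([0.9, 0.1, 0.2])),
    "furrowed brow, raised voice pitch": ("angry", np.array([0.1, 0.9, 0.7])),
    "downcast gaze, slow speech":        ("sad",   np.array([0.2, 0.3, 0.9])),
}

def classify_emotion(sensor_features):
    """Return the emotional state whose reference vector lies nearest
    (in Euclidean distance) to the fused audio/visual feature vector."""
    best_label, best_dist = "unknown", float("inf")
    for label, ref in EXPRESSION_DB.values():
        dist = float(np.linalg.norm(sensor_features - ref))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Example: a feature vector fused from gaze, posture, and speech sensors.
print(classify_emotion(np.array([0.85, 0.15, 0.25])))  # -> happy
```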

A method and system for recognizing behavior are disclosed in patent no. 9,489,570. The method includes: capturing at least one video stream of one or more subjects; extracting body-skeleton data from the video stream; computing feature extractions on the skeleton data to generate a set of 3-dimensional delta units for each frame; generating a histogram sequence for each frame by projecting that frame's 3-dimensional delta units onto a spherical coordinate system divided into spherical bins; generating an energy map for each histogram sequence by mapping the spherical bins versus time; applying a Histogram of Oriented Gradients (HOG) algorithm to the energy maps to generate a single column vector; and classifying that vector as a behavior and/or emotion.
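
A minimal sketch of one plausible reading of this pipeline is given below: frame-to-frame 3-D joint displacements serve as the delta units, each frame yields an energy-weighted spherical histogram, the histograms are stacked into a bins-versus-time energy map, a HOG descriptor is computed over the map, and a linear SVM performs the final classification. The bin counts, HOG parameters, and synthetic data are assumptions, not the patent's settings.

```python
import numpy as np
from skimage.feature import hog      # HOG descriptor over the energy map
from sklearn.svm import LinearSVC    # final behavior/emotion classifier

N_AZIMUTH_BINS, N_ELEVATION_BINS = 8, 4   # assumed spherical binning

def delta_units(skeleton_seq):
    """Frame-to-frame 3-D displacement of each joint.
    skeleton_seq: (n_frames, n_joints, 3) array of joint positions."""
    return np.diff(skeleton_seq, axis=0)           # (n_frames-1, n_joints, 3)

def spherical_histogram(deltas):
    """Project one frame's 3-D delta units onto spherical bins,
    weighting each unit by its magnitude (its 'energy')."""
    x, y, z = deltas[:, 0], deltas[:, 1], deltas[:, 2]
    r = np.linalg.norm(deltas, axis=1) + 1e-9
    azimuth = np.arctan2(y, x)                     # [-pi, pi]
    elevation = np.arcsin(np.clip(z / r, -1, 1))   # [-pi/2, pi/2]
    hist, _, _ = np.histogram2d(
        azimuth, elevation,
        bins=[N_AZIMUTH_BINS, N_ELEVATION_BINS],
        range=[[-np.pi, np.pi], [-np.pi / 2, np.pi / 2]],
        weights=r)
    return hist.ravel()                            # one histogram per frame

def energy_map(skeleton_seq):
    """Stack the per-frame histograms into a bins-versus-time map."""
    return np.stack([spherical_histogram(d)
                     for d in delta_units(skeleton_seq)], axis=1)

def feature_vector(skeleton_seq):
    """Apply HOG to the energy map and flatten to a single column vector."""
    return hog(energy_map(skeleton_seq), orientations=8,
               pixels_per_cell=(4, 4), cells_per_block=(1, 1))

# Toy usage: eight synthetic 40-frame, 15-joint sequences with binary labels.
rng = np.random.default_rng(0)
X = np.array([feature_vector(rng.normal(size=(40, 15, 3))) for _ in range(8)])
y = [0, 1] * 4
clf = LinearSVC().fit(X, y)
print(clf.predict(X[:2]))
```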

Invention no. 9,405,962 presents a method for determining a user's facial emotion in the presence of a facial artifact. The method includes detecting Action Units (AUs) for a first set of frames containing the facial artifact; analyzing the detected AUs; registering the analyzed AUs as a neutral facial expression with the facial artifact in the first set of frames; predicting the AUs in a second set of frames; and determining the facial emotion by comparing the registered neutral expression with the AUs predicted in the second set of frames.
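
The comparison step can be sketched with a simple rule-based mapping from Action Units to emotions. The AU-to-emotion rules below are illustrative FACS-style heuristics and the intensity values are invented; the patent does not prescribe this particular mapping.

```python
# Simplified FACS-style rules (illustrative only, not the patent's mapping):
# an emotion fires when all of its Action Units deviate from the baseline.
EMOTION_RULES = {
    "happiness": {6, 12},     # cheek raiser + lip corner puller
    "surprise":  {1, 2, 26},  # inner/outer brow raiser + jaw drop
    "anger":     {4, 7, 23},  # brow lowerer + lid tightener + lip tightener
}

def determine_emotion(neutral, predicted, thresh=0.3):
    """Compare predicted AU intensities against the registered neutral
    baseline; return the first emotion whose AUs are all activated."""
    active = {au for au, v in predicted.items()
              if v - neutral.get(au, 0.0) > thresh}
    for emotion, required in EMOTION_RULES.items():
        if required <= active:
            return emotion
    return "neutral"

# Neutral baseline registered from the first set of frames (with the
# artifact present), then AU intensities predicted in a later set.
neutral_aus   = {1: 0.1, 2: 0.1, 4: 0.0, 6: 0.1, 7: 0.0, 12: 0.2, 23: 0.0, 26: 0.1}
predicted_aus = {1: 0.1, 2: 0.2, 4: 0.1, 6: 0.7, 7: 0.1, 12: 0.8, 23: 0.1, 26: 0.2}
print(determine_emotion(neutral_aus, predicted_aus))  # -> happiness
```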

Patent no. 9,401,097 discloses a method and apparatus for providing an emotion expression service using an emotion expression identifier. The method includes collecting users' emotion evaluations of content related to a word or phrase, the evaluations being made after the users view the content, and displaying an emotion expression identifier representing the collected evaluations in the vicinity of the word or phrase. The method and apparatus enable a user to intuitively identify the emotion expressions of other users (netizens) in relation to a word or phrase, for example on a Web page such as a portal site.
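
A toy version of the aggregation and display logic might look as follows; the evaluation data and identifier glyphs are invented for illustration.

```python
from collections import Counter

# Invented identifier glyphs and collected user evaluations.
IDENTIFIER = {"joy": "😊", "anger": "😠", "sadness": "😢"}
evaluations = {"signal processing": ["joy", "joy", "anger", "joy"]}

def annotate(phrase):
    """Display an emotion expression identifier, summarizing the users'
    collected evaluations, in the vicinity of the word or phrase."""
    counts = Counter(evaluations.get(phrase, []))
    if not counts:
        return phrase
    top, n = counts.most_common(1)[0]
    return f"{phrase} {IDENTIFIER[top]} ({n}/{sum(counts.values())})"

print(annotate("signal processing"))  # -> signal processing 😊 (3/4)
```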

An approach is provided by patent no. 9,386,139 for accessing services, applications, and content through an emotion-based user interface. Descriptors corresponding to an emotion of the user are presented for selection, and selecting a descriptor initiates presentation of the options associated with it (e.g., actions for accessing services, applications, or content available to the user's device).
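
A minimal sketch of such an interface is a mapping from emotion descriptors to actionable options; the descriptors and options below are hypothetical.

```python
# Hypothetical mapping from emotion descriptors to the options
# (services, applications, content) presented upon selection.
OPTIONS = {
    "stressed": ["open meditation app", "play calming playlist"],
    "bored":    ["suggest nearby events", "open game library"],
    "curious":  ["open news digest", "recommend a podcast"],
}

def present_descriptors():
    """Descriptors corresponding to user emotions, offered for selection."""
    return sorted(OPTIONS)

def select(descriptor):
    """Selecting a descriptor initiates presentation of its options."""
    return OPTIONS.get(descriptor, [])

print(present_descriptors())   # -> ['bored', 'curious', 'stressed']
print(select("stressed"))      # -> ['open meditation app', ...]
```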

Patent no. 9,384,189 discloses an apparatus and method for predicting the pleasantness-unpleasantness index of words. The apparatus includes a computing unit configured to compute an emotion correlation between a word and one or more comparison words, compute the emotion correlations between those comparison words and the multiple reference words of a reference word set, compute multiple first absolute emotion similarity values between the word and the reference words, and compute at least one second absolute emotion similarity value between each pair of reference words in the set; and a prediction unit configured to predict the word's pleasantness-unpleasantness index from the first absolute emotion similarity values, the second absolute emotion similarity values, and the preset pleasantness-unpleasantness indices of the reference words.
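
Ignoring the second (reference-to-reference) similarity values for brevity, a simplified version of the prediction step might take the similarity-weighted average of the reference words' preset indices, as sketched below with invented values.

```python
import numpy as np

def predict_index(word_sims, ref_indices):
    """Predict a word's pleasantness-unpleasantness index as the
    similarity-weighted average of the reference words' preset indices."""
    weights = word_sims / word_sims.sum()
    return float(weights @ ref_indices)

# Invented reference set: preset indices on a [-1, 1] scale
# (unpleasant .. pleasant) and the new word's similarity to each.
ref_indices = np.array([0.9, 0.7, -0.8, -0.6])  # e.g. joy, gift, pain, loss
word_sims   = np.array([0.8, 0.6,  0.1,  0.1])  # first absolute similarities
print(predict_index(word_sims, ref_indices))    # -> 0.625
```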

One embodiment of invention no. 9,380,413 provides a system for dynamically forming the content of a message to a user based on the user's perceived emotional state. During operation, the system determines the user's geo-location and analyzes a news feed associated with that geo-location to infer the perceived emotional state. The system then forms the content of a message to the user based on that state and, finally, delivers the message.
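
A crude sketch of this flow, with an invented lexicon-based sentiment score standing in for the patent's news-feed analysis, might look like this:

```python
# Invented geo-located news feed and a toy sentiment lexicon.
NEWS_BY_GEO = {"Denver, CO": ["Local team wins championship",
                              "Sunny weekend ahead"]}
POSITIVE = {"wins", "sunny", "record", "celebrat"}
NEGATIVE = {"storm", "outage", "loss", "delay"}

def perceived_emotion(geo):
    """Score the geo-located news feed with a toy sentiment lexicon."""
    text = " ".join(NEWS_BY_GEO.get(geo, [])).lower()
    score = sum(w in text for w in POSITIVE) - sum(w in text for w in NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def form_message(user, geo):
    """Form the message content to match the perceived emotional state."""
    tone = {
        "positive": "Great news in your area! Ready to review your week?",
        "negative": "Hope all is well where you are. We're here if you need us.",
        "neutral":  "Here's your weekly summary.",
    }[perceived_emotion(geo)]
    return f"Hi {user}, {tone}"

print(form_message("Ada", "Denver, CO"))
```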

In one embodiment of patent no. 9,355,651, a method detects autism in a natural language environment using a microphone, a sound recorder, and a computer programmed for the specialized purpose of processing the recordings captured by the microphone and sound recorder combination. The method segments the captured audio signal into a plurality of recording segments, determines which segments correspond to a key child, and determines which of those are classified as key-child recordings. It then extracts phone-based features of the key-child recordings, compares those features to known phone-based features for children, and determines a likelihood of autism based on the comparison.
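
The overall flow can be sketched as below. The fixed-length segmentation, spectral key-child test, energy/zero-crossing "phone" features, and z-score likelihood are all simplifying assumptions standing in for the patent's trained models.

```python
import numpy as np

def segment(audio, sr, seg_s=1.0):
    """Split the recording into fixed-length segments (a stand-in for
    the patent's speaker segmentation)."""
    n = int(sr * seg_s)
    return [audio[i:i + n] for i in range(0, len(audio) - n + 1, n)]

def is_key_child(seg, sr):
    """Toy key-child test: dominant spectral peak in a child-like pitch
    range. A real system would use a trained speaker classifier."""
    spectrum = np.abs(np.fft.rfft(seg))
    peak_hz = np.fft.rfftfreq(len(seg), 1 / sr)[int(np.argmax(spectrum))]
    return 250.0 <= peak_hz <= 600.0

def phone_features(segs):
    """Stand-in for phone-based features: per-segment energy and
    zero-crossing rate."""
    return np.array([[float(np.mean(s ** 2)),
                      float(np.mean(np.abs(np.diff(np.sign(s))) > 0))]
                     for s in segs])

def likelihood_of_autism(feats, norm_mean, norm_std):
    """Compare features to known child norms: mean absolute z-score,
    squashed to (0, 1). Illustrative, not the patent's model."""
    z = np.abs((feats.mean(axis=0) - norm_mean) / norm_std)
    return float(1.0 / (1.0 + np.exp(-(z.mean() - 1.0))))

# Toy usage: 5 s of a 400 Hz tone plus noise as the "key child" audio.
sr = 16000
t = np.arange(sr * 5) / sr
audio = np.sin(2 * np.pi * 400 * t) + 0.1 * np.random.default_rng(1).normal(size=t.size)
key_segs = [s for s in segment(audio, sr) if is_key_child(s, sr)]
if key_segs:
    feats = phone_features(key_segs)
    print(likelihood_of_autism(feats, np.array([1.0, 0.5]), np.array([0.5, 0.2])))
```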

If you have an interesting patent to share when we next feature patents related to emotion recognition, or if you are especially interested in a signal processing research field that you would like to highlight in this section, please send email to Csaba Benedek (benedek.csaba AT sztaki DOT mta DOT hu).

References

Number: 9,508,008
Title: Wearable emotion detection and feedback system
Inventors: Jerauld; Robert (Kirkland, WA)
Issued: November 29, 2016
Assignee: Microsoft Technology Licensing, LLC (Redmond, WA)

Number: 9,489,570
Title: Method and system for emotion and behavior recognition
Inventors: Cao; Chen (Gainesville, FL), Zhang; Yongmian (Union City, CA), Gu; Haisong (Cupertino, CA)
Issued: November 8, 2016
Assignee: Konica Minolta Laboratory U.S.A., Inc. (San Mateo, CA)

Number: 9,405,962
Title: Method for on-the-fly learning of facial artifacts for facial emotion recognition
Inventors: Balasubramanian; Anand (Bangalore, IN), Sudha; Velusamy (Bangalore, IN), Viswanath; Gopalakrishnan (Bangalore, IN), Anshul; Sharma (Bangalore, IN), Pratibha; Moogi (Bangalore, IN)
Issued: August 2, 2016
Assignee: Samsung Electronics Co., Ltd. (Suwon-si, KR)

Number: 9,401,097
Title: Method and apparatus for providing emotion expression service using emotion expression identifier
Inventors: Kim; Jong-Phil (Uiwang-si, KR), Kim; Kwang-Il (Seoul, KR)
Issued: July 26, 2016

Number: 9,386,139
Title: Method and apparatus for providing an emotion-based user interface
Inventors: Knight; Paul Antony (Farnborough, GB)
Issued: July 5, 2016
Assignee: Nokia Technologies OY (Espoo, FI)

Number: 9,384,189
Title: Apparatus and method for predicting the pleasantness-unpleasantness index of words using relative emotion similarity
Inventors: Lee; Soo Won (Seoul, KR), Lee; Kang Bok (Seoul, KR)
Issued: July 5, 2016
Assignee: Foundation of Soongsil University--Industry Corporation (Seoul, KR)

Number: 9,380,413
Title: Dynamically forming the content of a message to a user based on a perceived emotion
Inventors: Joshi; Rekha M. (Karnataka, IN)
Issued: June 28, 2016
Assignee: Intuit Inc. (Mountain View, CA)

Number: 9,355,651
Title: System and method for expressive language, developmental disorder, and emotion assessment
Inventors: Xu; Dongxin D. (Boulder, CO), Paul; Terrance D. (Boulder, CO)
Issued: May 31, 2016
Assignee: Lena Foundation (Boulder, CO)
