Emotion Recognition

A Pattern Analysis Approach
 
 
Wiley (publisher)
  • published December 29, 2014
  • 584 pages
 
E-book | ePUB with Adobe DRM
978-1-118-91060-3 (ISBN)
 
A timely book containing foundations and current research directions on emotion recognition by facial expression, voice, gesture and biopotential signals
This book provides a comprehensive examination of the research methodology of different modalities of emotion recognition. Key topics of discussion include facial expression, voice and biopotential signal-based emotion recognition. Special emphasis is given to feature selection, feature reduction, classifier design and multi-modal fusion to improve performance of emotion-classifiers.
Written by several experts, the book covers a range of tools and techniques, including dynamic Bayesian networks, neural networks, hidden Markov models, rough sets, type-2 fuzzy sets, and support vector machines, and their applications to emotion recognition from different modalities. The book ends with a discussion of emotion recognition in the automotive field, used to detect driver stress and anger, which degrade driving performance and ability.
There is an increasing demand for emotion recognition in diverse fields, including psychotherapy, biomedicine, and security in government, public, and private agencies. Emotion recognition has also been prioritized by industry, including Hewlett Packard, in the design and development of next-generation human-computer interface (HCI) systems.
Emotion Recognition: A Pattern Analysis Approach would be of great interest to researchers, graduate students and practitioners, as the book
* Offers both foundations and advances on emotion recognition in a single volume
* Provides a thorough and insightful introduction to the subject by utilizing computational tools of diverse domains
* Inspires young researchers to prepare for their own research
* Demonstrates directions for future research through new technologies, such as Microsoft Kinect and EEG systems
Amit Konar is a Professor of Electronics and Tele-Communication Engineering at Jadavpur University, India, where he offers graduate-level courses on Artificial Intelligence and directs research in Cognitive Science, Robotics, and Human-Computer Interfaces. Dr. Konar is the recipient of many prestigious grants and awards and is the author of 10 books and over 350 research publications. He has provided consultancy services to government and private industry and has served in editorial roles for many journals, including IEEE Transactions on Systems, Man and Cybernetics (Part A) and IEEE Transactions on Fuzzy Systems.
Aruna Chakraborty is an Associate Professor in the Department of Computer Science and Engineering, St. Thomas' College of Engineering and Technology, India. She is also a Visiting Faculty member at Jadavpur University, where she offers graduate-level courses on Intelligent Automation and Robotics, and Cognitive Science. Her research interests include human-computer interfaces, emotional intelligence, and reasoning with fuzzy logic.

PREFACE


Emotion represents a psychological state of the human mind. Researchers from different domains hold diverse opinions about the developmental process of emotion. Philosophers believe that emotion originates as a result of substantial (positive or negative) changes in our personal situations or environment. Biologists, however, consider our nervous and hormonal systems responsible for the development of emotions. Current research on brain imaging reveals that the cortex and the subcortical region of the frontal brain are responsible for the arousal of emotion. Although these views of the developmental process of emotion conflict, experimental psychologists have shown that a change in our external or cognitive states, carried by neuronal signals, triggers our hormonal glands, which in turn excite specific modules in the human brain to develop a feeling of emotion.

The arousal of emotion is usually accompanied by manifestations in our external appearance, such as changes in facial expression, voice, gesture, posture, and other physiological conditions. Recognizing emotion from its external manifestation often leads to inaccurate inferences, for two main reasons. First, the manifestation may not truly correspond to the arousal of the specific emotion. Second, measuring the external manifestation requires instruments of high precision and accuracy. The first problem is unsolvable when the subjects under study suppress their emotions or feign false ones. Presuming that the subjects cooperate with the recognition process, we attend only to the second problem, which can be solved by advanced instrumentation.

This single volume, Emotion Recognition: A Pattern Analysis Approach, provides thorough and insightful coverage of research methodologies for different modalities of emotion recognition, including facial expression, voice, and biopotential signals. It is primarily meant for graduate students and young researchers who wish to begin their doctoral/MS research in this new discipline. The book is equally useful to professionals engaged in the design and development of intelligent systems for applications in psychotherapy and human-computer interactive systems. It is an edited volume written by several experts with specialized knowledge in the diverse domains of emotion recognition, and it offers a comprehensive, in-depth treatment of both the theory and the experiments of emotion recognition.

The recognition process involves extraction of features from the external manifestation of emotion in facial images, voice, and biopotential signals. Not all extracted features are equally useful for emotion recognition. Thus, the step following feature extraction is to reduce the dimensionality of the feature space using feature reduction techniques. The last step of emotion recognition is to employ a classifier or clustering method to assign the measured signals to one specific emotion class. Several techniques of computational intelligence and machine learning can be used here to recognize emotion from its feature space; a minimal sketch of this generic pipeline appears below.
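As a rough, hypothetical illustration of the three-stage pipeline described above (feature extraction, feature reduction, classification), the following scikit-learn sketch runs on synthetic feature vectors standing in for facial, voice, or biopotential features; the class count and component numbers are arbitrary choices, not values from the book.

```python
# Illustrative sketch of the generic emotion-recognition pipeline:
# raw features -> normalization -> feature reduction -> classification.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 120))          # 300 samples, 120 raw features (synthetic)
y = rng.integers(0, 6, size=300)         # 6 hypothetical emotion classes

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipeline = make_pipeline(
    StandardScaler(),                    # normalize raw features
    PCA(n_components=20),                # feature reduction
    SVC(kernel="rbf"),                   # classifier
)
pipeline.fit(X_train, y_train)
print("held-out accuracy:", pipeline.score(X_test, y_test))
```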

The book includes 20 contributory chapters. Each chapter starts with an abstract, followed by introduction, methodology, experiments and results, conclusions, and references. A biography and photograph of each contributor are given at the end of their chapter to inspire and motivate young researchers to start their research careers in this new discipline through interaction with these contributors.

Chapter 1 serves as a prerequisite for the rest of the book. It examines emotion recognition as a pattern recognition problem and reviews the commonly used techniques for feature extraction, feature selection, and classification of emotions from different modalities, including facial expression, voice, gesture, and posture, alongside the corresponding techniques for general pattern recognition. Lastly, it compares the different techniques used in the recognition of single and multimodal emotions.

In Chapter 2, Tong and Ji propose a systematic approach to modeling the dynamic properties of facial actions, including both the temporal development of each action unit and the dynamic dependencies among them in a spontaneous facial display. In particular, they employ a Dynamic Bayesian Network (DBN) to explicitly model the dynamic and semantic relationships among the action units: the dynamic nature of facial action is captured by directed temporal links among the action units, while the semantic relationships are represented by directed static links. They use domain knowledge and training data to construct the DBN model automatically, and action units are recognized through probabilistic inference over time. Experiments with real images reveal that explicitly modeling the dynamic dependencies among action units enables the proposed method to outperform existing techniques for action unit recognition in spontaneous facial displays.
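The following toy sketch is not Tong and Ji's DBN; it only illustrates the underlying idea of probabilistic inference over time for dependent action units, collapsing two hypothetical AUs into one joint hidden state and running a standard forward pass over noisy frame-wise detections. All probabilities are assumed values.

```python
# Simplified illustration of temporal probabilistic inference over action units (AUs).
import numpy as np

# Joint hidden states over (AU_a, AU_b): 00, 01, 10, 11
states = [(0, 0), (0, 1), (1, 0), (1, 1)]

# Assumed transition matrix, chosen so that AU_b tends to follow AU_a
# (a directed temporal/semantic dependency between the two units).
T = np.array([
    [0.80, 0.05, 0.10, 0.05],
    [0.30, 0.55, 0.05, 0.10],
    [0.05, 0.05, 0.30, 0.60],
    [0.05, 0.10, 0.05, 0.80],
])

def emission(obs, state):
    """Likelihood of noisy per-AU detector outputs given the true joint state."""
    p = 1.0
    for det, true in zip(obs, state):
        p *= 0.9 if det == true else 0.1   # assumed 90% detector reliability
    return p

def forward(observations, prior=np.full(4, 0.25)):
    """Filtered posterior P(state_t | obs_1..t) for each frame."""
    belief = prior.copy()
    posteriors = []
    for obs in observations:
        belief = belief @ T                                   # predict
        belief = belief * np.array([emission(obs, s) for s in states])  # update
        belief /= belief.sum()
        posteriors.append(belief.copy())
    return np.array(posteriors)

# Noisy detections for 4 frames: AU_a fires first, then AU_b follows.
obs_seq = [(1, 0), (1, 0), (1, 1), (1, 1)]
print(forward(obs_seq).round(3))
```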

Chapter 3 by Saha et al. provides a new forum for cross-cultural studies of facial expression. A psychological study of facial expression across different cultural groups reveals that the facial information representing a specific expression varies across cultures. Here, the authors demonstrate that the occurrence of the action units proposed by Ekman exhibits inter-cultural variation. A rule base for classifying the six basic expressions is generated using a decision tree. Experiments reveal that the performance of the classifier improves when the rule base is made culture specific.
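As a hedged sketch of how a rule base can be read off a decision tree, the snippet below fits a tree on synthetic binary action-unit occurrences and prints its if/then rules; the AU data and labels are invented and do not reflect the chapter's cross-cultural data.

```python
# Deriving a human-readable rule base from a decision tree on AU occurrences.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
au_occurrence = rng.integers(0, 2, size=(200, 8))   # 8 hypothetical AUs, present/absent
expressions = rng.integers(0, 6, size=200)          # 6 basic expressions (labels 0..5)

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(au_occurrence, expressions)

# export_text turns the fitted tree into if/then rules, i.e. a (culture-specific) rule base.
print(export_text(tree, feature_names=[f"AU{i}" for i in range(8)]))
```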

In Chapter 4, Chang and Huang propose a novel approach to designing a subject-dependent facial expression recognition system. Facial expressions representing a particular emotion vary widely across people, so designing a general strategy that correctly recognizes emotions across subjects remains an unsolved problem. Chang and Huang employ a Radial Basis Function (RBF) neural network to classify seven emotions: neutral, happy, angry, surprised, sad, scared, and disgusted. Experimental results substantiating the classification methodology indicate that the proposed system can accurately identify emotions from facial expressions.
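The following is a minimal RBF-network sketch under simple assumptions (k-means centers, Gaussian hidden units, least-squares output weights) on synthetic features; it is not the authors' implementation, only an illustration of the architecture named above.

```python
# Minimal RBF network for 7-class emotion classification (illustrative only).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 30))            # 400 facial-feature vectors (synthetic)
y = rng.integers(0, 7, size=400)          # 7 classes: neutral, happy, angry, surprised, sad, scared, disgusted

centers = KMeans(n_clusters=25, n_init=10, random_state=0).fit(X).cluster_centers_
sigma = np.mean(np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2))

def hidden(X):
    """Gaussian RBF activations of each sample with respect to each center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

H = hidden(X)
Y = np.eye(7)[y]                           # one-hot targets
W, *_ = np.linalg.lstsq(H, Y, rcond=None)  # output weights by least squares

pred = hidden(X) @ W
print("training accuracy:", (pred.argmax(axis=1) == y).mean())
```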

Zia Uddin and Kim in Chapter 5 present a new method to recognize facial expressions from time-sequential facial images. They employ enhanced Independent Component Analysis to extract independent component features and use Fisher Linear Discriminant Analysis (FLDA) to classify emotions.
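A hedged sketch of the two-stage idea, independent component features followed by Fisher discriminant classification, is shown below using scikit-learn's FastICA and LDA on synthetic data; the chapter's "enhanced" ICA variant is not reproduced here.

```python
# ICA features followed by Fisher linear discriminant classification (illustrative).
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
frames = rng.normal(size=(500, 256))       # flattened face-image frames (synthetic)
labels = rng.integers(0, 6, size=500)      # 6 expression classes

ica = FastICA(n_components=40, random_state=0)
ic_features = ica.fit_transform(frames)    # independent component features

lda = LinearDiscriminantAnalysis()
lda.fit(ic_features, labels)
print("training accuracy:", lda.score(ic_features, labels))
```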

Yang and Wang in Chapter 6 propose a new technique for feature selection in the facial expression recognition problem using rough set theory. They design a self-learning attribute reduction algorithm based on rough sets and domain-oriented data mining theory. Their results indicate that rough set methods outperform genetic algorithms on the feature selection problem, and that geometric features concerning the mouth have the highest importance in emotion recognition.
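To make the rough-set vocabulary concrete, the toy sketch below computes a dependency degree (the fraction of objects in the positive region) and a greedy reduct over a few invented facial attributes; it is not the chapter's self-learning algorithm, only the basic mechanism.

```python
# Toy rough-set attribute reduction: keep attributes that preserve the dependency degree.
from itertools import groupby

def partition(rows, attrs):
    """Group row indices into equivalence classes by their values on the given attributes."""
    key = lambda i: tuple(rows[i][a] for a in attrs)
    idx = sorted(range(len(rows)), key=key)
    return [set(g) for _, g in groupby(idx, key=key)]

def dependency(rows, attrs, decision):
    """Fraction of objects whose equivalence class is pure with respect to the decision."""
    pos = 0
    for block in partition(rows, attrs):
        if len({rows[i][decision] for i in block}) == 1:
            pos += len(block)
    return pos / len(rows)

# rows: (mouth_open, brow_raised, eye_open, emotion) -- invented data
rows = [
    (1, 1, 1, "surprise"), (1, 1, 0, "surprise"),
    (1, 0, 1, "happy"),    (1, 0, 0, "happy"),
    (0, 0, 1, "neutral"),  (0, 1, 1, "fear"),
]
conditional, decision = [0, 1, 2], 3
full = dependency(rows, conditional, decision)

# Greedy reduct: drop any attribute whose removal keeps the dependency degree unchanged.
reduct = list(conditional)
for a in conditional:
    trial = [x for x in reduct if x != a]
    if trial and dependency(rows, trial, decision) == full:
        reduct = trial
print("dependency:", full, "reduct (attribute indices):", reduct)
```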

Halder et al. in Chapter 7 propose a novel scheme for facial expression recognition using type-2 fuzzy sets. Both interval and general type-2 fuzzy sets (IT2FS and GT2FS) are used independently to model fuzzy face spaces for different emotions. The most important finding of their research is the automated evaluation of secondary membership functions from an ensemble of primary membership functions obtained from different sources. The evaluated secondary memberships are then used to transform a GT2FS into an equivalent IT2FS, thereby extending the IT2FS reasoning mechanism for classification to GT2FS. Experiments reveal that GT2FS-based recognition outperforms IT2FS with respect to classification accuracy, at the cost of additional computational complexity.
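As a very rough illustration of the interval type-2 idea only, the sketch below bounds each emotion's fuzzy face space for a single feature by a lower and an upper membership function and scores a measurement by its membership interval; the per-emotion parameters are invented, and the midpoint ranking is a crude stand-in for proper type reduction, not the authors' reasoning mechanism.

```python
# Interval type-2 fuzzy membership of a single facial feature (toy example).
import numpy as np

def gaussian(x, mean, sigma):
    return np.exp(-0.5 * ((x - mean) / sigma) ** 2)

# Assumed per-emotion models of one feature (e.g. normalized mouth opening):
# (mean, sigma_lower, sigma_upper), the two sigmas spanning the footprint of uncertainty.
emotions = {
    "happy":    (0.70, 0.08, 0.15),
    "surprise": (0.90, 0.06, 0.12),
    "neutral":  (0.30, 0.05, 0.10),
}

measurement = 0.78
for name, (mean, s_lo, s_hi) in emotions.items():
    lower = gaussian(measurement, mean, s_lo)   # lower membership
    upper = gaussian(measurement, mean, s_hi)   # upper membership
    score = 0.5 * (lower + upper)               # crude type-reduced score
    print(f"{name:9s} membership interval [{lower:.3f}, {upper:.3f}]  score {score:.3f}")
```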

Chapter 8 by Zheng et al. surveys recent advances in emotion recognition from non-frontal 3D and multi-view 2D facial image analysis. Feature extraction is the most pertinent issue in non-frontal facial image analysis for emotion recognition. Zheng et al. employ geometric features and appearance-based features, including the scale-invariant feature transform (SIFT), local binary patterns (LBP), and Gabor wavelet features. LBP features capture local image information and have proven effective for describing facial emotion. SIFT features are invariant to image translation, scaling, and rotation, partially invariant to illumination changes, and valued for their robustness to local geometric distortion. Gabor wavelet features have also proved effective for face and facial expression recognition; the Gabor filter usually employs a kernel constructed as the product of a Gaussian envelope and a harmonic oscillation function. The rest of the chapter provides a thorough discussion of 3D non-frontal face databases, including BU-3DFE and its dynamic version BU-4DFE, along with the Multi-PIE and Bosphorus databases. The chapter ends with a discussion of major issues to be considered by future researchers.
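For readers unfamiliar with these appearance features, the sketch below computes LBP and Gabor responses with scikit-image on a synthetic patch (a real face patch would be used in practice); parameter values are arbitrary and chosen only for illustration.

```python
# LBP and Gabor appearance features on a synthetic grayscale patch (illustrative).
import numpy as np
from scipy.signal import fftconvolve
from skimage.feature import local_binary_pattern
from skimage.filters import gabor_kernel

rng = np.random.default_rng(0)
patch = (rng.random((64, 64)) * 255).astype(np.uint8)   # stand-in for a face patch

# LBP: 8 neighbours on a radius-1 circle, summarized as a histogram feature vector.
lbp = local_binary_pattern(patch, P=8, R=1, method="uniform")
lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

# Gabor: convolve with a kernel (Gaussian envelope times a complex harmonic)
# and keep the magnitude response as a texture feature.
kernel = gabor_kernel(frequency=0.2, theta=np.pi / 4)
gabor_mag = np.abs(fftconvolve(patch, kernel, mode="same"))

print("LBP histogram:", lbp_hist.round(3))
print("mean Gabor magnitude:", gabor_mag.mean().round(3))
```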

Cen et al. in Chapter 9 propose a method for speech emotion recognition that employs a maximum a posteriori (MAP) based fusion technique. The proposed method effectively combines the strengths of several classification techniques for recognizing emotional states in speech signals. To examine the effectiveness of the proposed...
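The snippet below only illustrates the general fusion idea: combine per-classifier posterior estimates for the same utterance and pick the class with the maximum combined posterior. It assumes independence and multiplies posteriors; the chapter's exact MAP formulation may differ, and the numbers are invented.

```python
# Combining posterior estimates from several classifiers and taking the MAP decision.
import numpy as np

classes = ["neutral", "happy", "angry", "sad"]

# Assumed posterior outputs P(class | features) from three different classifiers.
posteriors = np.array([
    [0.40, 0.30, 0.20, 0.10],   # classifier 1
    [0.25, 0.40, 0.20, 0.15],   # classifier 2
    [0.30, 0.45, 0.15, 0.10],   # classifier 3
])

log_combined = np.log(posteriors).sum(axis=0)        # product of posteriors in log space
combined = np.exp(log_combined - log_combined.max())
combined /= combined.sum()

print(dict(zip(classes, combined.round(3))))
print("decision:", classes[int(np.argmax(combined))])
```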
