Search Results (4)
  • Open Access

    ARTICLE

    Faster Region Convolutional Neural Network (FRCNN) Based Facial Emotion Recognition

    J. Sheril Angel, A. Diana Andrushia*, T. Mary Neebha, Oussama Accouche, Louai Saker, N. Anand

    CMC-Computers, Materials & Continua, Vol.79, No.2, pp. 2427-2448, 2024, DOI:10.32604/cmc.2024.047326

    Abstract Facial emotion recognition (FER) has become a focal point of research due to its widespread applications, ranging from human-computer interaction to affective computing. While traditional FER techniques have relied on handcrafted features and classification models trained on image or video datasets, recent strides in artificial intelligence and deep learning (DL) have ushered in more sophisticated approaches. The research aims to develop a FER system using a Faster Region Convolutional Neural Network (FRCNN) and design a specialized FRCNN architecture tailored for facial emotion recognition, leveraging its ability to capture spatial hierarchies within localized regions of facial…
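
    As a rough illustration of the region-based approach described above (a hypothetical sketch, not the authors' code), torchvision's stock Faster R-CNN can be adapted so that its detection head classifies face regions into emotion labels; the seven-label set below is an assumption, since the paper's label set is not shown here.

    ```python
    # Hypothetical sketch, not the paper's implementation: adapt
    # torchvision's Faster R-CNN so the detection head predicts emotion
    # classes for face regions. The 7-label set below is an assumption.
    import torch
    from torchvision.models.detection import fasterrcnn_resnet50_fpn
    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

    EMOTIONS = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]

    model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    # Swap the COCO classification head for one class per emotion label
    # plus the implicit background class Faster R-CNN expects.
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, len(EMOTIONS) + 1)

    model.eval()
    with torch.no_grad():
        out = model([torch.rand(3, 224, 224)])[0]  # dict: boxes, labels, scores
    ```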

  • Open Access

    ARTICLE

    Deep Facial Emotion Recognition Using Local Features Based on Facial Landmarks for Security System

    Youngeun An, Jimin Lee, EunSang Bak*, Sungbum Pan*

    CMC-Computers, Materials & Continua, Vol.76, No.2, pp. 1817-1832, 2023, DOI:10.32604/cmc.2023.039460

    Abstract Emotion recognition based on facial expressions is one of the most critical elements of human-machine interfaces. Most conventional methods for emotion recognition using facial expressions use the entire facial image to extract features and then recognize specific emotions through a pre-trained model. In contrast, this paper proposes a novel feature vector extraction method using the Euclidean distances between landmarks whose positions change with facial expression, especially around the eyes, eyebrows, nose, and mouth. Then, we apply a new classifier using an ensemble network to increase emotion recognition accuracy. The emotion recognition performance was…
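
    The distance-based feature extraction described above can be sketched directly (a minimal illustration, not the paper's implementation; the exact landmark set is an assumption): given N detected landmarks, collect the Euclidean distance of every unique pair into one feature vector.

    ```python
    # Minimal sketch of a landmark-distance feature vector. Landmarks
    # could come from any detector, e.g. dlib's 68-point model.
    import numpy as np

    def landmark_distance_features(landmarks: np.ndarray) -> np.ndarray:
        """landmarks: (N, 2) array of (x, y) points -> all pairwise distances."""
        diffs = landmarks[:, None, :] - landmarks[None, :, :]  # (N, N, 2)
        dists = np.sqrt((diffs ** 2).sum(axis=-1))             # (N, N)
        rows, cols = np.triu_indices(len(landmarks), k=1)      # each pair once
        return dists[rows, cols]

    # Example: 68 landmarks yield a 68*67/2 = 2278-dimensional feature vector.
    features = landmark_distance_features(np.random.rand(68, 2))
    assert features.shape == (2278,)
    ```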

  • Open Access

    ARTICLE

    Facial Emotion Recognition Using Swarm Optimized Multi-Dimensional DeepNets with Losses Calculated by Cross Entropy Function

    A. N. Arun*, P. Maheswaravenkatesh, T. Jayasankar

    Computer Systems Science and Engineering, Vol.46, No.3, pp. 3285-3301, 2023, DOI:10.32604/csse.2023.035356

    Abstract The human face forms a canvas on which various non-verbal expressions are communicated. Together with verbal communication, these expressive cues convey a person's actual intent. In many cases, a person may present an outward expression that differs from the genuine emotion or feeling the person experiences. Even when people try to hide these emotions, the internally felt emotions can still surface as facial expressions in the form of micro expressions. These micro expressions cannot be masked and reflect the actual emotional state of a person under study. Such micro…
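
    The cross-entropy function named in the title is the standard classification loss: with class probabilities p = softmax(logits) and true class y, the loss is L = -log p_y. A minimal worked sketch (not the authors' code) follows.

    ```python
    # Minimal sketch of the categorical cross-entropy loss named in the
    # title; any deep-learning framework provides an equivalent built-in.
    import numpy as np

    def cross_entropy(logits: np.ndarray, label: int) -> float:
        z = logits - logits.max()                # stabilize the softmax
        log_probs = z - np.log(np.exp(z).sum())  # log-softmax
        return float(-log_probs[label])

    # A confident, correct prediction gives a small loss...
    print(cross_entropy(np.array([4.0, 0.5, -1.0]), label=0))  # ~0.04
    # ...while a confident, wrong prediction is penalized heavily.
    print(cross_entropy(np.array([4.0, 0.5, -1.0]), label=2))  # ~5.04
    ```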

  • Open Access

    ARTICLE

    Empathic Responses of Behavioral-Synchronization in Human-Agent Interaction

    Sung Park*, Seongeon Park, Mincheol Whang

    CMC-Computers, Materials & Continua, Vol.71, No.2, pp. 3761-3784, 2022, DOI:10.32604/cmc.2022.023738

    Abstract Artificial entities, such as virtual agents, have become more pervasive. Their long-term presence among humans requires that virtual agents be able to express appropriate emotions to elicit empathy from users. Affective empathy involves behavioral mimicry, a synchronized co-movement between dyadic pairs. However, the characteristics of such synchrony between humans and virtual agents remain unclear in empathic interactions. Our study evaluates participants' behavioral synchronization when a virtual agent exhibits an emotional expression congruent with the emotional context through facial expressions, behavioral gestures, and voice. Participants viewed an emotion-eliciting video stimulus (negative or positive)…
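
    The paper's synchrony measure is not shown here; as a generic, hypothetical illustration, dyadic behavioral synchrony is often scored as the peak lagged cross-correlation between two movement time series, e.g. participant and agent head motion.

    ```python
    # Generic illustration, not the paper's protocol: score synchrony
    # between two movement series as the peak lagged cross-correlation.
    import numpy as np

    def peak_synchrony(a: np.ndarray, b: np.ndarray, max_lag: int = 30) -> float:
        a = (a - a.mean()) / a.std()
        b = (b - b.mean()) / b.std()
        best = -1.0
        for lag in range(-max_lag, max_lag + 1):
            # Align the two series at this lag and correlate the overlap.
            x = a[max(0, -lag):len(a) - max(0, lag)]
            y = b[max(0, lag):len(b) - max(0, -lag)]
            best = max(best, float(np.corrcoef(x, y)[0, 1]))
        return best  # highest correlation over all tested lags

    sync = peak_synchrony(np.random.rand(300), np.random.rand(300))
    ```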
