Search Results (4)
  • Open Access

    ARTICLE

    DAUNet: Detail-Aware U-Shaped Network for 2D Human Pose Estimation

    Xi Li1,2, Yuxin Li2, Zhenhua Xiao3,*, Zhenghua Huang1, Lianying Zou1

    CMC-Computers, Materials & Continua, Vol.81, No.2, pp. 3325-3349, 2024, DOI:10.32604/cmc.2024.056464 - 18 November 2024

    Abstract Human pose estimation is a critical research area in the field of computer vision, playing a significant role in applications such as human-computer interaction, behavior analysis, and action recognition. In this paper, we propose a U-shaped keypoint detection network (DAUNet) based on an improved ResNet subsampling structure and spatial grouping mechanism. This network addresses key challenges in traditional methods, such as information loss, large network redundancy, and insufficient sensitivity to low-resolution features. DAUNet is composed of three main components. First, we introduce an improved BottleNeck block that employs partial convolution and strip pooling to reduce…
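The abstract above mentions strip pooling inside the improved BottleNeck block. As a rough illustration of what strip pooling computes (not the paper's implementation, whose details are behind the truncation), the operation averages a 2D feature map along horizontal and vertical strips and combines the two pooled signals at each position:

```python
def strip_pool(feat):
    """Minimal strip-pooling sketch on a 2D feature map (list of lists).

    Each output position receives the mean of its row (H x 1 strip)
    plus the mean of its column (1 x W strip). Real implementations
    typically follow this with 1D convolutions and a sigmoid gate;
    those steps are omitted here for clarity.
    """
    h, w = len(feat), len(feat[0])
    row_means = [sum(row) / w for row in feat]                      # horizontal strips
    col_means = [sum(feat[i][j] for i in range(h)) / h              # vertical strips
                 for j in range(w)]
    return [[row_means[i] + col_means[j] for j in range(w)]
            for i in range(h)]
```

Because each strip spans an entire row or column, the pooled value gives every position a long-range context cue that ordinary square pooling windows miss.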

  • Open Access

    ARTICLE

    Lightweight Multi-Resolution Network for Human Pose Estimation

    Pengxin Li1, Rong Wang1,2,*, Wenjing Zhang1, Yinuo Liu1, Chenyue Xu1

    CMES-Computer Modeling in Engineering & Sciences, Vol.138, No.3, pp. 2239-2255, 2024, DOI:10.32604/cmes.2023.030677 - 15 December 2023

    Abstract Human pose estimation aims to localize the body joints from image or video data. With the development of deep learning, pose estimation has become a hot research topic in the field of computer vision. In recent years, human pose estimation has achieved great success in multiple fields such as animation and sports. However, to obtain accurate positioning results, existing methods may suffer from large model sizes, a high number of parameters, and increased complexity, leading to high computing costs. In this paper, we propose a new lightweight feature encoder to construct a high-resolution network that…
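The abstract motivates a lightweight encoder by the parameter cost of standard convolutions. One common lightweight design (used here purely as an illustrative assumption, since the paper's own encoder is behind the truncation) is the depthwise-separable convolution; the parameter counts can be compared directly:

```python
def conv_params(k, c_in, c_out):
    """Parameters of a standard k x k convolution (bias omitted)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise k x k per channel, then a 1 x 1 pointwise projection."""
    return k * k * c_in + c_in * c_out

# A typical 3x3 layer mapping 64 -> 128 channels:
standard = conv_params(3, 64, 128)                 # 73,728 parameters
light = depthwise_separable_params(3, 64, 128)     # 8,768 parameters
```

For this layer the separable form uses roughly an eighth of the parameters, which is the kind of saving lightweight multi-resolution networks rely on.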

  • Open Access

    ARTICLE

    Multi-Level Feature Aggregation-Based Joint Keypoint Detection and Description

    Jun Li1, Xiang Li1, Yifei Wei1,*, Mei Song1, Xiaojun Wang2

    CMC-Computers, Materials & Continua, Vol.73, No.2, pp. 2529-2540, 2022, DOI:10.32604/cmc.2022.029542 - 16 June 2022

    Abstract Image keypoint detection and description is a popular method to find pixel-level connections between images, which is a basic and critical step in many computer vision tasks. The existing methods are far from optimal in terms of keypoint positioning accuracy and generation of robust and discriminative descriptors. This paper proposes a new end-to-end self-supervised training deep learning network. The network uses a backbone feature encoder to extract multi-level feature maps, then performs joint image keypoint detection and description in a forward pass. On the one hand, in order to enhance the localization accuracy of keypoints…
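The abstract describes aggregating multi-level feature maps from a backbone encoder. A minimal sketch of that idea, assuming the usual pyramid layout where each deeper level halves the spatial resolution (the paper's exact fusion scheme is not visible here), is to upsample every level to the finest resolution and sum:

```python
def upsample_nn(feat, factor):
    """Nearest-neighbour upsampling of a 2D map by an integer factor."""
    return [[feat[i // factor][j // factor]
             for j in range(len(feat[0]) * factor)]
            for i in range(len(feat) * factor)]

def aggregate(levels):
    """Fuse a feature pyramid by upsampling each level to the finest
    resolution and summing element-wise.

    levels[0] is the finest map; levels[k] is assumed to be at
    1 / 2**k of its resolution.
    """
    out = [row[:] for row in levels[0]]
    h, w = len(out), len(out[0])
    for k, feat in enumerate(levels[1:], start=1):
        up = upsample_nn(feat, 2 ** k)
        for i in range(h):
            for j in range(w):
                out[i][j] += up[i][j]
    return out
```

Summation is only one fusion choice; concatenation followed by a 1x1 convolution is equally common, but the resolution-alignment step is the same.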

  • Open Access

    ARTICLE

    Keypoint Description Using Statistical Descriptor with Similarity-Invariant Regions

    Ibrahim El rube'*, Sameer Alsharif

    Computer Systems Science and Engineering, Vol.42, No.1, pp. 407-421, 2022, DOI:10.32604/csse.2022.022400 - 02 December 2021

    Abstract This article presents a method for the description of key points using simple statistics for regions controlled by neighboring key points to remedy the gap in existing descriptors. Usually, the existent descriptors such as speeded up robust features (SURF), Kaze, binary robust invariant scalable keypoints (BRISK), features from accelerated segment test (FAST), and oriented FAST and rotated BRIEF (ORB) can competently detect, describe, and match images in the presence of some artifacts such as blur, compression, and illumination. However, the performance and reliability of these descriptors decrease for some imaging variations such as point of…
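The abstract proposes describing keypoints with simple statistics over surrounding regions. As a toy illustration of a statistical descriptor (a deliberately simplified stand-in, not the article's similarity-invariant construction), one can summarize the intensity patch around a keypoint by its mean and standard deviation and compare descriptors directly:

```python
import math

def patch_descriptor(img, y, x, r=2):
    """Describe the keypoint at (y, x) by the mean and standard
    deviation of intensities in a (2r+1) x (2r+1) window.

    img is a 2D list of intensities; the window is assumed to lie
    fully inside the image (no border handling in this sketch).
    """
    vals = [img[i][j]
            for i in range(y - r, y + r + 1)
            for j in range(x - r, x + r + 1)]
    n = len(vals)
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / n
    return (mean, math.sqrt(var))

def match(d1, d2, tol=1e-6):
    """Match two descriptors by component-wise closeness."""
    return all(abs(a - b) < tol for a, b in zip(d1, d2))
```

Such low-order statistics are cheap and fairly stable under blur and compression, which is the regime where the abstract says classic descriptors like SURF and ORB begin to degrade.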

Displaying results 1-4 of 4.