
Recent Advances in Virtual Reality

Submission Deadline: 30 November 2022 (closed)

Guest Editors

Prof. Zhigeng Pan, Nanjing University of Information Science & Technology, China
Dr. Gustavo Marfia, University of Bologna, Italy
Prof. Zhihan Lv, Qingdao University, China

Summary

Interactive technologies such as eye tracking, speech recognition, and gesture input have made great progress as research has deepened. Through these technologies, users can exchange information with virtual reality systems over multiple channels, blurring the boundary between the human environment and the computer system. When several human sensory organs participate in the interaction between a user and a computer system, the interaction is, from the perspective of system and technology, called multimodal interaction. Compared with single-channel interaction, multimodal interaction has broader application potential in natural human-computer interaction systems.

In the study of multimodal interaction, user behavior is one of the main forms of input: effective natural interaction can be achieved by classifying the user behaviors that convey user intention, and there is a corresponding mathematical mapping between user cognition and user behavior. To enhance the user's interactive experience, a sensory interactive system needs a diverse set of interactive devices. According to their sensory attributes, current interactive devices fall mainly into voice input and output devices, image input and image display devices, touchpads and other input devices, and visual tracking devices. Applied to various computer systems, these devices improve both the efficiency and the accuracy of human-computer interaction; applied judiciously to augmented reality systems, they can greatly improve the interactivity and user experience of augmented reality. Related research has shown that multimodal interaction systems achieve good results in fields such as digital media, cultural communication, commodity sales, and information communication.
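
The multi-channel input and behavior classification described above can be illustrated with a minimal late-fusion sketch. Everything named below (the intent labels, channel names, and reliability weights) is a hypothetical example rather than a method proposed in this issue: each sensory channel produces a probability distribution over user intentions, and the system combines them with per-channel weights.

    import numpy as np

    # Hypothetical intent vocabulary shared by all channels.
    INTENTS = ["select", "move", "rotate", "none"]

    def fuse_channels(channel_probs, channel_weights):
        """Late fusion: weighted average of per-channel intent distributions."""
        names = list(channel_probs)
        weights = np.array([channel_weights[n] for n in names])
        probs = np.stack([channel_probs[n] for n in names])
        fused = (weights[:, None] * probs).sum(axis=0) / weights.sum()
        return INTENTS[int(fused.argmax())], fused

    # Example: gaze and speech agree on "select"; the gesture channel is uncertain.
    channel_probs = {
        "gaze":    np.array([0.70, 0.10, 0.10, 0.10]),
        "speech":  np.array([0.60, 0.20, 0.10, 0.10]),
        "gesture": np.array([0.30, 0.30, 0.30, 0.10]),
    }
    channel_weights = {"gaze": 0.4, "speech": 0.4, "gesture": 0.2}  # assumed weights
    intent, fused = fuse_channels(channel_probs, channel_weights)
    print(intent, fused)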


Although virtual reality technology has developed rapidly and been widely promoted, the study of multimodal interaction still has limitations. Constrained by current technology, a system cannot directly identify user cognition; meanwhile, an imperfectly designed virtual reality system degrades the accuracy and efficiency of information transmission, increasing cognitive load and reducing user experience. Existing research on multimodal interaction concentrates on software design and hardware equipment, with an emphasis on applications of multimodal systems. For current augmented reality systems and applications, most studies still focus on visual and gesture-based interaction, which lacks direct physical and sensory stimulation for users. It is therefore worth exploring additional interaction channels through which users can input information, with the system processing that input into different forms of feedback delivered to the sensory organs that correspond to each form of information. Future research should also consider how to combine the architecture of multimodal interaction with augmented reality.


Several research topics in multimodal interaction have attracted broad interest, including user cognition, user recognition, and behavior coding. This special issue aims to provide readers with a comprehensive overview of research and work on multimodal interaction in virtual reality. It particularly welcomes original commentary articles, opinions, methods, and modeling research. Excellent papers from ICVR 2022 (2022 IEEE 8th International Conference on Virtual Reality) will be considered for inclusion in the Special Issue. All submitted papers will undergo the Journal's standard peer-review process.


The areas covered by this special issue may include but are not limited to the following:

 

• Construction and Optimization of Multimodal Interaction

• Multimodal Interaction Realization for Virtual Reality

• Analysis of Application Scenario of Multimodal Interaction

• Comparative Analysis of Multimodal Interaction and Single-channel Interaction

• Analysis of Application Values of Multimodal Interaction

• Cognitive Expression under Multichannel Integration

• Quantitative Description of User Cognition in Interactive Systems

• Mathematical Logic Relationship between Multimodal Natural Interaction and User Cognition

• Data Fusion of User Behavior in Multimodal Natural Interaction

• Research on Cognition of Multimodal Interaction

• Relationship between Cognitive Load and User Behavior in Interactive Systems

• Evaluation and Analysis of User Multisensory Cognitive Channels

• Analysis of Weighting of Multisensory Channels

• Research on Multimodal Interaction Based on User Cognition



Published Papers


  • Open Access

    ARTICLE

    Activation Redistribution Based Hybrid Asymmetric Quantization Method of Neural Networks

    Lu Wei, Zhong Ma, Chaojie Yang
    CMES-Computer Modeling in Engineering & Sciences, Vol.138, No.1, pp. 981-1000, 2024, DOI:10.32604/cmes.2023.027085
    (This article belongs to the Special Issue: Recent Advances in Virtual Reality)
    Abstract: The demand for adopting neural networks in resource-constrained embedded devices is continuously increasing. Quantization is one of the most promising solutions to reduce computational cost and memory storage on embedded devices. In order to reduce the complexity and overhead of deploying neural networks on Integer-only hardware, most current quantization methods use a symmetric quantization mapping strategy to quantize a floating-point neural network into an integer network. However, although symmetric quantization has the advantage of easier implementation, it is sub-optimal for cases where the range could be skewed and not symmetric. This often comes at the…

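    The contrast between symmetric and asymmetric quantization sketched in the abstract can be made concrete. The snippet below is a generic illustration of standard uniform quantization, not the hybrid asymmetric method of the paper: symmetric mapping fixes the zero point at 0 and can waste range on skewed activations, while asymmetric mapping shifts the zero point to cover the actual min-max interval.

        import numpy as np

        def symmetric_quantize(x, num_bits=8):
            """Map a float tensor to signed integers with the zero point fixed at 0."""
            qmax = 2 ** (num_bits - 1) - 1            # e.g. 127 for int8
            scale = np.max(np.abs(x)) / qmax
            q = np.clip(np.round(x / scale), -qmax - 1, qmax)
            return q.astype(np.int8), scale

        def asymmetric_quantize(x, num_bits=8):
            """Map a float tensor to unsigned integers with a data-dependent zero point."""
            qmax = 2 ** num_bits - 1                  # e.g. 255 for uint8
            scale = (x.max() - x.min()) / qmax
            zero_point = int(round(-x.min() / scale))
            q = np.clip(np.round(x / scale) + zero_point, 0, qmax)
            return q.astype(np.uint8), scale, zero_point

        # Skewed activations (e.g. post-ReLU): asymmetric mapping uses the full range,
        # while symmetric mapping leaves the negative half of int8 unused.
        x = np.random.rand(1000) * 6.0
        q_sym, s_sym = symmetric_quantize(x)
        q_asym, s_asym, zp = asymmetric_quantize(x)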

  • Open Access

    ARTICLE

    SA-Model: Multi-Feature Fusion Poetic Sentiment Analysis Based on a Hybrid Word Vector Model

    Lingli Zhang, Yadong Wu, Qikai Chu, Pan Li, Guijuan Wang, Weihan Zhang, Yu Qiu, Yi Li
    CMES-Computer Modeling in Engineering & Sciences, Vol.137, No.1, pp. 631-645, 2023, DOI:10.32604/cmes.2023.027179
    (This article belongs to the Special Issue: Recent Advances in Virtual Reality)
    Abstract: Sentiment analysis in Chinese classical poetry has become a prominent topic in historical and cultural tracing, ancient literature research, etc. However, the existing research on sentiment analysis is relatively small. It does not effectively solve the problems such as the weak feature extraction ability of poetry text, which leads to the low performance of the model on sentiment analysis for Chinese classical poetry. In this research, we offer the SA-Model, a poetic sentiment analysis model. SA-Model firstly extracts text vector information and fuses it through Bidirectional encoder representation from transformers-Whole word masking-extension (BERT-wwm-ext) and Enhanced…
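
    As a generic illustration of fusing two word-vector representations for sentiment classification (the encoder dimensions, class count, and layer sizes below are placeholder assumptions, not the exact SA-Model pipeline), two sentence embeddings can be concatenated and passed to a small classification head:

        import torch
        import torch.nn as nn

        class FusedSentimentClassifier(nn.Module):
            """Concatenate two precomputed sentence embeddings and classify sentiment."""
            def __init__(self, dim_a=768, dim_b=768, num_classes=4):
                super().__init__()
                self.head = nn.Sequential(
                    nn.Linear(dim_a + dim_b, 256),
                    nn.ReLU(),
                    nn.Dropout(0.1),
                    nn.Linear(256, num_classes),
                )

            def forward(self, emb_a, emb_b):
                return self.head(torch.cat([emb_a, emb_b], dim=-1))

        # Placeholder embeddings standing in for two different pretrained encoders.
        emb_a = torch.randn(8, 768)
        emb_b = torch.randn(8, 768)
        logits = FusedSentimentClassifier()(emb_a, emb_b)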

  • Open Access

    ARTICLE

    3D Human Pose Estimation Using Two-Stream Architecture with Joint Training

    Jian Kang, Wanshu Fan, Yijing Li, Rui Liu, Dongsheng Zhou
    CMES-Computer Modeling in Engineering & Sciences, Vol.137, No.1, pp. 607-629, 2023, DOI:10.32604/cmes.2023.024420
    (This article belongs to the Special Issue: Recent Advances in Virtual Reality)
    Abstract: With the advancement of image sensing technology, estimating 3D human pose from monocular video has become a hot research topic in computer vision. 3D human pose estimation is an essential prerequisite for subsequent action analysis and understanding. It empowers a wide spectrum of potential applications in various areas, such as intelligent transportation, human-computer interaction, and medical rehabilitation. Currently, some methods for 3D human pose estimation in monocular video employ temporal convolutional network (TCN) to extract inter-frame feature relationships, but the majority of them suffer from insufficient inter-frame feature relationship extractions. In this paper, we decompose…
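
    The role of a temporal convolutional network over per-frame 2D keypoints, as mentioned in the abstract, can be sketched generically. The joint count and layer sizes below are illustrative assumptions, not the paper's two-stream architecture: 2D joint coordinates are treated as channels and convolved along the time axis to regress the 3D pose of the centre frame.

        import torch
        import torch.nn as nn

        NUM_JOINTS = 17  # assumed COCO-style skeleton

        class TemporalPoseRegressor(nn.Module):
            """Dilated 1D convolutions over a window of 2D poses -> 3D pose of the centre frame."""
            def __init__(self, channels=256):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv1d(NUM_JOINTS * 2, channels, kernel_size=3, dilation=1),
                    nn.ReLU(),
                    nn.Conv1d(channels, channels, kernel_size=3, dilation=3),
                    nn.ReLU(),
                )
                self.head = nn.Conv1d(channels, NUM_JOINTS * 3, kernel_size=1)

            def forward(self, poses_2d):                          # (batch, frames, joints, 2)
                b, t, j, _ = poses_2d.shape
                x = poses_2d.reshape(b, t, j * 2).transpose(1, 2)  # (batch, 2J, frames)
                out = self.head(self.net(x))                       # (batch, 3J, frames')
                return out[:, :, out.shape[-1] // 2].reshape(b, j, 3)

        pose_3d = TemporalPoseRegressor()(torch.randn(2, 27, NUM_JOINTS, 2))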

  • Open Access

    ARTICLE

    UOUU: User-Object Distance and User-User Distance Combined Method for Collaboration Task

    Xiangdong Li, Pengfei Wang, Hanfei Xia, Yuting Niu
    CMES-Computer Modeling in Engineering & Sciences, Vol.136, No.3, pp. 3213-3238, 2023, DOI:10.32604/cmes.2023.023895
    (This article belongs to the Special Issue: Recent Advances in Virtual Reality)
    Abstract: Augmented reality superimposes digital information onto objects in the physical world and enables multi-user collaboration. Despite that previous proxemic interaction research has explored many applications of user-object distance and user-user distance in an augmented reality context, respectively, and combining both types of distance can improve the efficiency of users’ perception and interaction with task objects and collaborators by providing users with insight into spatial relations of user-task object and user-user, less is concerned about how the two types of distances can be simultaneously adopted to assist collaboration tasks across multi-users. To fulfill the gap, we…

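    A minimal sketch of combining the two distance types discussed in the abstract (the thresholds and coordinate frame are assumptions for illustration, not the UOUU method itself): user-object distance identifies which task object a user attends to, while user-user distance decides whether two users are close enough to be treated as collaborators.

        import numpy as np

        def nearest_object(user_pos, object_positions):
            """Return the index and distance of the task object closest to a user."""
            dists = np.linalg.norm(object_positions - user_pos, axis=1)
            idx = int(dists.argmin())
            return idx, float(dists[idx])

        def are_collaborating(user_a, user_b, threshold=1.5):
            """Treat two users as collaborators when their mutual distance is small."""
            return float(np.linalg.norm(user_a - user_b)) < threshold

        # Illustrative positions in a shared AR coordinate frame (metres).
        objects = np.array([[0.0, 0.0, 0.8], [2.0, 1.0, 0.8]])
        user_a = np.array([0.5, 0.2, 1.6])
        user_b = np.array([1.2, 0.4, 1.6])
        obj_idx, d = nearest_object(user_a, objects)
        collab = are_collaborating(user_a, user_b)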

  • Open Access

    ARTICLE

    The Flipping-Free Full-Parallax Tabletop Integral Imaging with Enhanced Viewing Angle Based on Space-Multiplexed Voxel Screen and Compound Lens Array

    Peiren Wang, Jinqiang Bi, Zilong Li, Xue Han, Zhengyang Li, Xiaozheng Wang
    CMES-Computer Modeling in Engineering & Sciences, Vol.136, No.3, pp. 3197-3211, 2023, DOI:10.32604/cmes.2023.024305
    (This article belongs to the Special Issue: Recent Advances in Virtual Reality)
    Abstract: Tabletop integral imaging display with a more realistic and immersive experience has always been a hot spot in three-dimensional imaging technology, widely used in biomedical imaging and visualization to enhance medical diagnosis. However, the traditional structural characteristics of integral imaging display inevitably introduce the flipping effect outside the effective viewing angle. Here, a full-parallax tabletop integral imaging display without the flipping effect based on space-multiplexed voxel screen and compound lens array is demonstrated, and two holographic functional screens with different parameters are optically designed and fabricated. To eliminate the flipping effect in the reconstruction process,…


  • Open Access

    ARTICLE

    ER-Net: Efficient Recalibration Network for Multi-View Multi-Person 3D Pose Estimation

    Mi Zhou, Rui Liu, Pengfei Yi, Dongsheng Zhou
    CMES-Computer Modeling in Engineering & Sciences, Vol.136, No.2, pp. 2093-2109, 2023, DOI:10.32604/cmes.2023.024189
    (This article belongs to the Special Issue: Recent Advances in Virtual Reality)
    Abstract: Multi-view multi-person 3D human pose estimation is a hot topic in the field of human pose estimation due to its wide range of application scenarios. With the introduction of end-to-end direct regression methods, the field has entered a new stage of development. However, the regression results of joints that are more heavily influenced by external factors are not accurate enough even for the optimal method. In this paper, we propose an effective feature recalibration module based on the channel attention mechanism and a relative optimal calibration strategy, which is applied to the multi-view multi-person 3D…

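    The channel-attention recalibration mentioned in the abstract follows the general squeeze-and-excitation pattern; the block below is that generic pattern, not ER-Net's specific module: per-channel statistics are squeezed by global pooling, passed through a small bottleneck, and used to rescale the feature channels.

        import torch
        import torch.nn as nn

        class ChannelRecalibration(nn.Module):
            """Squeeze-and-excitation style channel attention (generic sketch)."""
            def __init__(self, channels, reduction=16):
                super().__init__()
                self.pool = nn.AdaptiveAvgPool2d(1)           # squeeze: global spatial average
                self.fc = nn.Sequential(
                    nn.Linear(channels, channels // reduction),
                    nn.ReLU(inplace=True),
                    nn.Linear(channels // reduction, channels),
                    nn.Sigmoid(),                             # per-channel gates in (0, 1)
                )

            def forward(self, x):                             # x: (batch, channels, H, W)
                b, c, _, _ = x.shape
                gates = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
                return x * gates                              # recalibrated features

        features = torch.randn(2, 64, 32, 32)
        recalibrated = ChannelRecalibration(64)(features)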

  • Open Access

    ARTICLE

    Aggregate Point Cloud Geometric Features for Processing

    Yinghao Li, Renbo Xia, Jibin Zhao, Yueling Chen, Liming Tao, Hangbo Zou, Tao Zhang
    CMES-Computer Modeling in Engineering & Sciences, Vol.136, No.1, pp. 555-571, 2023, DOI:10.32604/cmes.2023.024470
    (This article belongs to the Special Issue: Recent Advances in Virtual Reality)
    Abstract: As 3D acquisition technology develops and 3D sensors become increasingly affordable, large quantities of 3D point cloud data are emerging. How to effectively learn and extract the geometric features from these point clouds has become an urgent problem to be solved. The point cloud geometric information is hidden in disordered, unstructured points, making point cloud analysis a very challenging problem. To address this problem, we propose a novel network framework, called Tree Graph Network (TGNet), which can sample, group, and aggregate local geometric features. Specifically, we construct a Tree Graph by explicit rules, which consists…
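
    The sample-group-aggregate pipeline mentioned in the abstract can be illustrated with a plain farthest point sampling and radius grouping step, a common point-cloud building block rather than TGNet's tree-graph construction itself; the radius and neighbor cap below are assumed values.

        import numpy as np

        def farthest_point_sampling(points, num_samples):
            """Pick well-spread centroid indices from an (N, 3) point cloud."""
            n = points.shape[0]
            chosen = [0]
            dist = np.full(n, np.inf)
            for _ in range(num_samples - 1):
                dist = np.minimum(dist, np.linalg.norm(points - points[chosen[-1]], axis=1))
                chosen.append(int(dist.argmax()))
            return np.array(chosen)

        def ball_group(points, centroid_idx, radius=0.2, max_neighbors=32):
            """Group the neighbors of each sampled centroid within a fixed radius."""
            groups = []
            for c in centroid_idx:
                d = np.linalg.norm(points - points[c], axis=1)
                groups.append(np.where(d < radius)[0][:max_neighbors])
            return groups

        cloud = np.random.rand(1024, 3)
        centroids = farthest_point_sampling(cloud, 64)
        neighborhoods = ball_group(cloud, centroids)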

  • Open Access

    ARTICLE

    Monocular Depth Estimation with Sharp Boundary

    Xin Yang, Qingling Chang, Shiting Xu, Xinlin Liu, Yan Cui
    CMES-Computer Modeling in Engineering & Sciences, Vol.136, No.1, pp. 573-592, 2023, DOI:10.32604/cmes.2023.023424
    (This article belongs to the Special Issue: Recent Advances in Virtual Reality)
    Abstract: Monocular depth estimation is the basic task in computer vision. Its accuracy has tremendous improvement in the decade with the development of deep learning. However, the blurry boundary in the depth map is a serious problem. Researchers find that the blurry boundary is mainly caused by two factors. First, the low-level features, containing boundary and structure information, may be lost in deep networks during the convolution process. Second, the model ignores the errors introduced by the boundary area due to the few portions of the boundary area in the whole area, during the backpropagation. Focusing…
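
    One common way to keep depth boundaries sharp, in the spirit of the problem the abstract describes (this is a generic boundary-weighted loss, not the paper's specific method; the gain factor is an assumption), is to up-weight the regression error where the ground-truth depth has strong gradients:

        import torch
        import torch.nn.functional as F

        def boundary_weighted_l1(pred, gt, boundary_gain=4.0):
            """L1 depth loss with extra weight near edges of the ground-truth depth map."""
            # Approximate boundary strength with finite differences of the ground truth.
            dx = torch.abs(gt[:, :, :, 1:] - gt[:, :, :, :-1])
            dy = torch.abs(gt[:, :, 1:, :] - gt[:, :, :-1, :])
            edges = F.pad(dx, (0, 1, 0, 0)) + F.pad(dy, (0, 0, 0, 1))   # (B, 1, H, W)
            weight = 1.0 + boundary_gain * (edges / (edges.max() + 1e-8))
            return (weight * torch.abs(pred - gt)).mean()

        pred = torch.rand(2, 1, 64, 64)
        gt = torch.rand(2, 1, 64, 64)
        loss = boundary_weighted_l1(pred, gt)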
