Submission Deadline: 31 July 2024 (closed)
Multimodal image segmentation and recognition is a significant and challenging research field. With the rapid development of information technology, multimodal target information is now captured by many kinds of sensors, including optical, infrared, and radar sensors. How to effectively fuse and exploit these multimodal data, with their differing features and information content, has therefore become a key issue.
Multimodal learning, as a powerful framework for data representation and fusion, can learn fused features for complex data processing. In multimodal image processing, deep learning methods extract distinct features from multiple sensors, and information fusion methods then combine these features according to their contribution to target recognition. This approach addresses major challenges faced by classical methods; however, many issues still await solutions, such as fusion strategies for multimodal data, cognitive distortion caused by data imbalance, and one/few-shot models for small-sample settings.
This special issue therefore focuses on the methods and applications of multimodal learning in image processing, aiming to explore innovative methods and technologies that solve existing problems. Respected experts, scholars, and researchers are invited to share their latest research achievements and practical experience in this field, in order to promote the development of multimodal image recognition, improve classification and recognition accuracy, and provide reliable solutions for practical applications.
We sincerely invite researchers from academia and industry to submit original research papers, review articles, and technical reports, to jointly explore the methods and applications of multimodal learning in image processing, solve existing problems, and promote further development in this field.