Search Results (10)
  • Open Access

    ARTICLE

    Explicitly Color-Inspired Neural Style Transfer Using Patchified AdaIN

    Bumsoo Kim1, Wonseop Shin2, Yonghoon Jung1, Youngsup Park3, Sanghyun Seo1,4,*

    CMES-Computer Modeling in Engineering & Sciences, Vol.141, No.3, pp. 2143-2164, 2024, DOI:10.32604/cmes.2024.056079 - 31 October 2024

    Abstract Arbitrary style transfer aims to perceptually reflect the style of a reference image in artistic creations with visual aesthetics. Traditional style transfer models, particularly those using the adaptive instance normalization (AdaIN) layer, rely on global statistics, which often fail to capture the spatially local color distribution, leading to outputs that lack variation despite geometric transformations. To address this, we introduce Patchified AdaIN, a color-inspired style transfer method that applies AdaIN to localized patches, utilizing local statistics to capture the spatial color distribution of the reference image. This approach enables enhanced color awareness in style transfer, adapting…
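
    The patch-wise AdaIN idea this abstract describes can be illustrated with a minimal NumPy sketch: standard AdaIN applied per spatial patch rather than over the whole feature map. This is not the authors' implementation; the (C, H, W) feature layout, the epsilon value, and the 4×4 patch size are illustrative assumptions.

```python
import numpy as np

def adain(content, style, eps=1e-5):
    # Standard AdaIN: align the channel-wise mean/std of the content
    # features with those of the style features.
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True)
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    return s_std * (content - c_mean) / (c_std + eps) + s_mean

def patchified_adain(content, style, patch=4):
    # Apply AdaIN per non-overlapping spatial patch, so that local
    # (rather than global) style statistics drive the normalization.
    _, H, W = content.shape  # features shaped (C, H, W)
    out = np.empty_like(content)
    for y in range(0, H, patch):
        for x in range(0, W, patch):
            out[:, y:y + patch, x:x + patch] = adain(
                content[:, y:y + patch, x:x + patch],
                style[:, y:y + patch, x:x + patch],
            )
    return out
```

    Within each patch, the output's channel means match the style patch's means, which is the local-color-statistics property the abstract contrasts with global AdaIN.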

  • Open Access

    ARTICLE

    Constructive Robust Steganography Algorithm Based on Style Transfer

    Xiong Zhang1,2, Minqing Zhang1,2,3,*, Xu’an Wang1,2,3,*, Siyuan Huang1,2, Fuqiang Di1,2

    CMC-Computers, Materials & Continua, Vol.81, No.1, pp. 1433-1448, 2024, DOI:10.32604/cmc.2024.056742 - 15 October 2024

    Abstract Traditional information hiding techniques embed information by modifying carrier data, which can easily leave traces detectable by steganalysis tools. Especially in image transmission, both geometric and non-geometric attacks can cause subtle changes in the pixels of the image during transmission. To overcome these challenges, we propose a constructive robust image steganography technique based on style transformation. Unlike traditional steganography, our algorithm does not involve any direct modifications to the carrier data. In this study, we constructed a mapping dictionary by setting the correspondence between binary codes and image categories and…
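
    The mapping-dictionary construction this abstract describes can be sketched as a toy round trip: secret bits are split into fixed-width codes, each code selects an image category, and the receiver inverts the dictionary to recover the bits. The 2-bit code width and the style-category names are hypothetical, and on the receiver side the plain dictionary lookup stands in for actual image-category recognition.

```python
CODE_BITS = 2
CATEGORIES = ["mosaic", "candy", "udnie", "rain_princess"]  # hypothetical categories

# Mapping dictionary: each CODE_BITS-wide binary code maps to one category.
ENCODE = {format(i, f"0{CODE_BITS}b"): cat for i, cat in enumerate(CATEGORIES)}
DECODE = {cat: code for code, cat in ENCODE.items()}

def embed(bits: str) -> list[str]:
    # Zero-pad to a multiple of CODE_BITS, then map each code to the
    # category of image the sender will generate/transmit.
    padded = bits.ljust(-(-len(bits) // CODE_BITS) * CODE_BITS, "0")
    return [ENCODE[padded[i:i + CODE_BITS]]
            for i in range(0, len(padded), CODE_BITS)]

def extract(categories: list[str]) -> str:
    # The receiver recognizes each image's category (here: a lookup)
    # and inverts the mapping to recover the secret bits.
    return "".join(DECODE[c] for c in categories)
```

    Because the secret is carried by the choice of category rather than by pixel modifications, the carrier images themselves need not be altered, which is the "constructive" property the abstract emphasizes.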

  • Open Access

    ARTICLE

    Robust Information Hiding Based on Neural Style Transfer with Artificial Intelligence

    Xiong Zhang1,2, Minqing Zhang1,2,3,*, Xu An Wang1,2,3, Wen Jiang1,2, Chao Jiang1,2, Pan Yang1,4

    CMC-Computers, Materials & Continua, Vol.79, No.2, pp. 1925-1938, 2024, DOI:10.32604/cmc.2024.050899 - 15 May 2024

    Abstract This paper proposes an artificial intelligence-based robust information hiding algorithm to address the issue of confidential information being susceptible to noise attacks during transmission. The algorithm we designed aims to mitigate the impact of various noise attacks on the integrity of secret information during transmission. The method we propose involves encoding secret images into stylized encrypted images and applying adversarial transfer to both the style and content features of the original and embedded data. This process effectively enhances the concealment and imperceptibility of confidential information, thereby improving the security of such information during transmission and…

  • Open Access

    ARTICLE

    PP-GAN: Style Transfer from Korean Portraits to ID Photos Using Landmark Extractor with GAN

    Jongwook Si1, Sungyoung Kim2,*

    CMC-Computers, Materials & Continua, Vol.77, No.3, pp. 3119-3138, 2023, DOI:10.32604/cmc.2023.043797 - 26 December 2023

    Abstract The objective of style transfer is to maintain the content of an image while transferring the style of another image. However, conventional methods face challenges in preserving facial features, especially in Korean portraits where elements like the “Gat” (a traditional Korean hat) are prevalent. This paper proposes a deep learning network designed to perform style transfer that includes the “Gat” while preserving the identity of the face. Unlike traditional style transfer techniques, the proposed method aims to preserve the texture, attire, and the “Gat” in the style image by employing image sharpening and face landmark,…

  • Open Access

    ARTICLE

    ECGAN: Translate Real World to Cartoon Style Using Enhanced Cartoon Generative Adversarial Network

    Yixin Tang*

    CMC-Computers, Materials & Continua, Vol.76, No.1, pp. 1195-1212, 2023, DOI:10.32604/cmc.2023.039182 - 08 June 2023

    Abstract Visual illustration transformation from real-world to cartoon images is one of the famous and challenging tasks in computer vision. Image-to-image translation from real-world to cartoon domains poses issues such as a lack of paired training samples, lack of good image translation, low feature extraction from the previous domain images, and lack of high-quality image translation from the traditional generator algorithms. To solve the above-mentioned issues, paired independent model, high-quality dataset, Bayesian-based feature extractor, and an improved generator must be proposed. In this study, we propose a high-quality dataset to reduce the effect of paired training…

  • Open Access

    ARTICLE

    APST-Flow: A Reversible Network-Based Artistic Painting Style Transfer Method

    Meng Wang*, Yixuan Shao, Haipeng Liu

    CMC-Computers, Materials & Continua, Vol.75, No.3, pp. 5229-5254, 2023, DOI:10.32604/cmc.2023.036631 - 29 April 2023

    Abstract In recent years, deep generative models have been successfully applied to perform artistic painting style transfer (APST). The difficulties might lie in the loss of spatial detail during reconstruction and the inefficiency of model convergence caused by the irreversible encoder-decoder methodology of the existing models. To address this, this paper proposes a Flow-based architecture in which the encoder and decoder share a reversible network configuration. The proposed APST-Flow can efficiently reduce model uncertainty via a compact analysis-synthesis methodology, thereby improving generalization performance and convergence stability. For the generator, a Flow-based network using Wavelet additive coupling…

  • Open Access

    ARTICLE

    Emotional Vietnamese Speech Synthesis Using Style-Transfer Learning

    Thanh X. Le, An T. Le, Quang H. Nguyen*

    Computer Systems Science and Engineering, Vol.44, No.2, pp. 1263-1278, 2023, DOI:10.32604/csse.2023.026234 - 15 June 2022

    Abstract In recent years, speech synthesis systems have allowed for the production of very high-quality voices. Therefore, research in this domain is now turning to the problem of integrating emotions into speech. However, the method of constructing a speech synthesizer for each emotion has some limitations. First, this method often requires an emotional-speech data set with many sentences. Such data sets are very time-intensive and labor-intensive to complete. Second, training each of these models requires computers with large computational capabilities and a lot of effort and time for model tuning. In addition, each model for each…

  • Open Access

    ARTICLE

    Enhancing the Robustness of Visual Object Tracking via Style Transfer

    Abdollah Amirkhani1,*, Amir Hossein Barshooi1, Amir Ebrahimi2

    CMC-Computers, Materials & Continua, Vol.70, No.1, pp. 981-997, 2022, DOI:10.32604/cmc.2022.019001 - 07 September 2021

    Abstract The performance and accuracy of computer vision systems are affected by noise in different forms. Although numerous solutions and algorithms have been presented for dealing with every type of noise, a comprehensive technique that can cover all the diverse noises and mitigate their damaging effects on the performance and precision of various systems is still missing. In this paper, we have focused on the stability and robustness of one computer vision branch (i.e., visual object tracking). We have demonstrated that, without imposing a heavy computational load on a model or changing its algorithms, the drop in…

  • Open Access

    ARTICLE

    Image-to-Image Style Transfer Based on the Ghost Module

    Yan Jiang1, Xinrui Jia1, Liguo Zhang1,2,*, Ye Yuan1, Lei Chen3, Guisheng Yin1

    CMC-Computers, Materials & Continua, Vol.68, No.3, pp. 4051-4067, 2021, DOI:10.32604/cmc.2021.016481 - 06 May 2021

    Abstract The technology for image-to-image style transfer (a prevalent image processing task) has developed rapidly. The purpose of style transfer is to extract a texture from the source image domain and transfer it to the target image domain using a deep neural network. However, the existing methods typically have a large computational cost. To achieve efficient style transfer, we introduce a novel Ghost module into the GANILLA architecture to produce more feature maps from cheap operations. Then we utilize an attention mechanism to transform images with various styles. We optimize the original generative adversarial network (GAN)…

  • Open Access

    ARTICLE

    Data Augmentation Technology Driven By Image Style Transfer in Self-Driving Car Based on End-to-End Learning

    Dongjie Liu1, Jin Zhao1, *, Axin Xi2, Chao Wang1, Xinnian Huang1, Kuncheng Lai1, Chang Liu1

    CMES-Computer Modeling in Engineering & Sciences, Vol.122, No.2, pp. 593-617, 2020, DOI:10.32604/cmes.2020.08641 - 09 February 2020

    Abstract With the advent of deep learning, self-driving schemes based on deep learning are becoming more and more popular. Robust perception-action models should learn from data with different scenarios and real behaviors, while current end-to-end model learning is generally limited to training of massive data, innovation of deep network architecture, and learning in-situ model in a simulation environment. Therefore, we introduce a new image style transfer method into data augmentation, and improve the diversity of limited data by changing the texture, contrast ratio and color of the image, and then it is extended to the scenarios…

Displaying 1-10 on page 1 of 10.