Search Results (2)
  • Open Access

    ARTICLE

    UniTrans: Unified Parameter-Efficient Transfer Learning and Multimodal Alignment for Large Multimodal Foundation Model

Jiakang Sun, Ke Chen, Xinyang He, Xu Liu, Ke Li, Cheng Peng*

    CMC-Computers, Materials & Continua, Vol.83, No.1, pp. 219-238, 2025, DOI:10.32604/cmc.2025.059745 - 26 March 2025

    Abstract With the advancements in parameter-efficient transfer learning techniques, it has become feasible to leverage large pre-trained language models for downstream tasks under low-cost and low-resource conditions. However, applying this technique to multimodal knowledge transfer introduces a significant challenge: ensuring alignment across modalities while minimizing the number of additional parameters required for downstream task adaptation. This paper introduces UniTrans, a framework aimed at facilitating efficient knowledge transfer across multiple modalities. UniTrans leverages Vector-based Cross-modal Random Matrix Adaptation to enable fine-tuning with minimal parameter overhead. To further enhance modality alignment, we introduce two key components: the Multimodal…

  • Open Access

    ARTICLE

    Abnormal Action Detection Based on Parameter-Efficient Transfer Learning in Laboratory Scenarios

    Changyu Liu, Hao Huang, Guogang Huang*, Chunyin Wu, Yingqi Liang

    CMC-Computers, Materials & Continua, Vol.80, No.3, pp. 4219-4242, 2024, DOI:10.32604/cmc.2024.053625 - 12 September 2024

    Abstract Laboratory safety is a critical area of broad societal concern, particularly in the detection of abnormal actions. To enhance the efficiency and accuracy of detecting such actions, this paper introduces a novel method called TubeRAPT (Tubelet Transformer based on Adapter and Prefix Training Module). This method primarily comprises three key components: the TubeR network, an adaptive clustering attention mechanism, and a prefix training module. These components work in synergy to address the challenge of knowledge preservation in models pre-trained on large datasets while maintaining training efficiency. The TubeR network serves as the backbone for spatio-temporal…
