Search Results (2)
  • Open Access

    ARTICLE

    DyLoRA-TAD: Dynamic Low-Rank Adapter for End-to-End Temporal Action Detection

    Jixin Wu1,2, Mingtao Zhou2,3, Di Wu2,3, Wenqi Ren4, Jiatian Mei2,3, Shu Zhang1,*

    CMC-Computers, Materials & Continua, Vol.86, No.3, 2026, DOI:10.32604/cmc.2025.072964 - 12 January 2026

    Abstract End-to-end Temporal Action Detection (TAD) has achieved remarkable progress in recent years, driven by innovations in model architectures and the emergence of Video Foundation Models (VFMs). However, existing TAD methods that perform full fine-tuning of pretrained video models often incur substantial computational costs, which become particularly pronounced when processing long video sequences. Moreover, the need for precise temporal boundary annotations makes data labeling extremely expensive. In low-resource settings where annotated samples are scarce, direct fine-tuning tends to cause overfitting. To address these challenges, we introduce Dynamic Low-Rank Adapter (DyLoRA), a lightweight fine-tuning framework tailored specifically…

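The abstract above centers on replacing full fine-tuning of a pretrained video backbone with lightweight low-rank adapters. As a rough illustration of the general low-rank adapter idea only (not the paper's DyLoRA-TAD design, whose details are truncated here), the following is a minimal PyTorch sketch of a standard LoRA-style layer; the layer sizes, rank, and scaling below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pretrained linear layer plus a trainable low-rank update (generic LoRA sketch)."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # freeze the pretrained weights
            p.requires_grad = False
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)   # down-projection
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)  # up-projection
        nn.init.zeros_(self.lora_b.weight)        # zero init: no change to the base model at start
        self.scaling = alpha / rank

    def forward(self, x):
        # pretrained path plus scaled low-rank residual; only lora_a/lora_b receive gradients
        return self.base(x) + self.lora_b(self.lora_a(x)) * self.scaling

# Usage: wrap one projection of a pretrained backbone (hypothetical 768-dim features)
layer = LoRALinear(nn.Linear(768, 768), rank=8)
out = layer(torch.randn(2, 16, 768))
print(out.shape)  # torch.Size([2, 16, 768])
```

Because only the two small projection matrices are trainable, the number of updated parameters is a small fraction of the frozen backbone, which is what makes such adapters attractive in the low-resource settings the abstract describes.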
  • Open Access

    ARTICLE

    Optimizing Fine-Tuning in Quantized Language Models: An In-Depth Analysis of Key Variables

    Ao Shen1, Zhiquan Lai1,*, Dongsheng Li1,*, Xiaoyu Hu2

    CMC-Computers, Materials & Continua, Vol.82, No.1, pp. 307-325, 2025, DOI:10.32604/cmc.2024.057491 - 03 January 2025

    Abstract Large-scale Language Models (LLMs) have achieved significant breakthroughs in Natural Language Processing (NLP), driven by the pre-training and fine-tuning paradigm. While this approach allows models to specialize in specific tasks with reduced training costs, the substantial memory requirements during fine-tuning present a barrier to broader deployment. Parameter-Efficient Fine-Tuning (PEFT) techniques, such as Low-Rank Adaptation (LoRA), and parameter quantization methods have emerged as solutions to address these challenges by optimizing memory usage and computational efficiency. Among these, QLoRA, which combines PEFT and quantization, has demonstrated notable success in reducing memory footprints during fine-tuning, prompting the development…

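The second abstract concerns QLoRA-style fine-tuning, in which the base model's weights are quantized to 4-bit and only low-rank adapters are trained. The sketch below shows one common way to set this up, assuming the Hugging Face transformers, peft, and bitsandbytes libraries; the checkpoint name, target modules, and hyperparameters are placeholders for illustration, not values taken from the paper.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization of the frozen base weights (QLoRA-style configuration)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",             # placeholder checkpoint, not from the paper
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Attach trainable low-rank adapters to selected attention projections
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],    # illustrative choice of modules
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()          # only the adapter weights are updated
```

Choices such as the adapter rank, which modules receive adapters, and the quantization data type are exactly the kind of variables whose effect on fine-tuning quality the abstract says the paper analyzes.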