Search Results (10)
  • Open Access

    ARTICLE

    LKMT: Linguistics Knowledge-Driven Multi-Task Neural Machine Translation for Urdu and English

    Muhammad Naeem Ul Hassan1,2, Zhengtao Yu1,2,*, Jian Wang1,2, Ying Li1,2, Shengxiang Gao1,2, Shuwan Yang1,2, Cunli Mao1,2

    CMC-Computers, Materials & Continua, Vol.81, No.1, pp. 951-969, 2024, DOI:10.32604/cmc.2024.054673 - 15 October 2024

    Abstract Thanks to the strong representation capability of pre-trained language models, supervised machine translation models have achieved outstanding performance. However, the performance of these models drops sharply when the scale of the parallel training corpus is limited. Since pre-trained language models have a strong capacity for monolingual representation, the key challenge for machine translation is to construct an in-depth relationship between the source and target languages by injecting lexical and syntactic information into the pre-trained models. To alleviate the dependence on the parallel corpus, we propose a Linguistics Knowledge-Driven Multi-Task (LKMT) approach to…
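
    A minimal sketch of the multi-task idea the title suggests: one shared encoder representation trained under a weighted sum of a translation loss and an auxiliary linguistic loss (POS tagging here). All shapes, weights, and the 0.8/0.2 mix are illustrative assumptions, not the paper's actual configuration.

        import numpy as np

        rng = np.random.default_rng(0)

        def softmax(x):
            e = np.exp(x - x.max(axis=-1, keepdims=True))
            return e / e.sum(axis=-1, keepdims=True)

        def cross_entropy(logits, targets):
            p = softmax(logits)
            return -np.log(p[np.arange(len(targets)), targets]).mean()

        h = rng.normal(size=(4, 16))        # shared encoder states for 4 tokens
        W_mt = rng.normal(size=(16, 100))   # translation head (toy vocab of 100)
        W_pos = rng.normal(size=(16, 12))   # POS-tagging head (12 toy tags)

        mt_targets = rng.integers(0, 100, size=4)
        pos_targets = rng.integers(0, 12, size=4)

        # Weighted joint objective: the auxiliary syntax task injects linguistic
        # signal into the shared representation h during training.
        loss = 0.8 * cross_entropy(h @ W_mt, mt_targets) + \
               0.2 * cross_entropy(h @ W_pos, pos_targets)
        print(f"joint loss: {loss:.3f}")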

  • Open Access

    ARTICLE

    Improving Low-Resource Machine Translation Using Reinforcement Learning from Human Feedback

    Liqing Wang*, Yiheng Xiao

    Intelligent Automation & Soft Computing, Vol.39, No.4, pp. 619-631, 2024, DOI:10.32604/iasc.2024.052971 - 06 September 2024

    Abstract Neural Machine Translation is one of the key research directions in Natural Language Processing. However, limited by the scale and quality of the parallel corpus, the translation quality of low-resource Neural Machine Translation has always been unsatisfactory. When Reinforcement Learning from Human Feedback (RLHF) is applied to low-resource machine translation, it commonly encounters the issues of substandard preference-data quality and the high cost of manual feedback data. Therefore, a more cost-effective method for obtaining feedback data is proposed: first, the quality of the preference data is optimized through prompt engineering of a Large Language Model (LLM), …
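
    A hedged sketch of the cheaper-feedback idea: rank candidate translations with an automatic judge standing in for human annotators, then keep best-vs-worst pairs for reward-model training. score_with_judge is a hypothetical placeholder; the paper's actual LLM prompt-engineering pipeline is not shown in this excerpt.

        def score_with_judge(source: str, candidate: str) -> float:
            # Hypothetical stand-in for an LLM judge; here a crude length heuristic.
            return -abs(len(candidate.split()) - len(source.split()))

        def build_preference_pairs(source: str, candidates: list[str]):
            ranked = sorted(candidates, key=lambda c: score_with_judge(source, c),
                            reverse=True)
            # (chosen, rejected) pairs are the training unit for a reward model.
            return [(ranked[0], ranked[-1])]

        print(build_preference_pairs(
            "das Wetter ist heute schoen",
            ["the weather is nice today", "weather nice",
             "it is nice weather today now"]))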

  • Open Access

    ARTICLE

    Neural Machine Translation Models with Attention-Based Dropout Layer

    Huma Israr1,*, Safdar Abbas Khan1, Muhammad Ali Tahir1, Muhammad Khuram Shahzad1, Muneer Ahmad1, Jasni Mohamad Zain2,*

    CMC-Computers, Materials & Continua, Vol.75, No.2, pp. 2981-3009, 2023, DOI:10.32604/cmc.2023.035814 - 31 March 2023

    Abstract In bilingual translation, attention-based Neural Machine Translation (NMT) models are used to achieve synchrony between input and output sequences and the notion of alignment. NMT models have obtained state-of-the-art performance for several language pairs. However, there has been little work exploring useful architectures for Urdu-to-English machine translation. We conducted extensive Urdu-to-English translation experiments using Long Short-Term Memory (LSTM), Bidirectional Recurrent Neural Networks (Bi-RNN), the Statistical Recurrent Unit (SRU), the Gated Recurrent Unit (GRU), Convolutional Neural Networks (CNN), and the Transformer. Experimental results show that Bi-RNN and LSTM with an attention mechanism, trained iteratively with a scalable data set, make precise predictions on unseen…
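
    A minimal numpy sketch of dropout applied inside the attention layer, the mechanism the title points at: randomly zeroing (and rescaling) attention weights during training so the decoder cannot over-commit to a single source position. Shapes and the dropout rate are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        def softmax(x):
            e = np.exp(x - x.max(axis=-1, keepdims=True))
            return e / e.sum(axis=-1, keepdims=True)

        def attention_with_dropout(Q, K, V, p_drop=0.1, train=True):
            scores = Q @ K.T / np.sqrt(Q.shape[-1])
            weights = softmax(scores)
            if train:
                # Drop (and rescale) attention weights during training.
                mask = rng.random(weights.shape) >= p_drop
                weights = weights * mask / (1.0 - p_drop)
            return weights @ V

        Q = rng.normal(size=(3, 8))   # 3 decoder positions
        K = rng.normal(size=(5, 8))   # 5 encoder positions
        V = rng.normal(size=(5, 8))
        print(attention_with_dropout(Q, K, V).shape)   # (3, 8)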

  • Open Access

    ARTICLE

    Text Simplification Using Transformer and BERT

    Sarah Alissa1,*, Mike Wald2

    CMC-Computers, Materials & Continua, Vol.75, No.2, pp. 3479-3495, 2023, DOI:10.32604/cmc.2023.033647 - 31 March 2023

    Abstract Reading and writing are the main methods of interacting with web content. Text simplification tools are helpful for people with cognitive impairments, new language learners, and children, who may find it difficult to understand complex web content. Text simplification is the process of changing complex text into more readable and understandable text. Recent approaches to text simplification have adopted the machine translation concept to learn simplification rules from a parallel corpus of complex and simple sentences. In this paper, we propose two models based on the Transformer, an encoder-decoder structure that achieves state-of-the-art…
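
    A small sketch of the framing the abstract describes: the usual bilingual NMT data pipeline applies unchanged when the "source language" is complex English and the "target language" is simple English. The toy pairs and vocabulary code are illustrative only.

        pairs = [
            ("the committee endeavoured to ascertain the facts",
             "the committee tried to find the facts"),
            ("he purchased a dwelling in the vicinity",
             "he bought a house nearby"),
        ]

        def build_vocab(sentences):
            vocab = {"<pad>": 0, "<s>": 1, "</s>": 2}
            for s in sentences:
                for w in s.split():
                    vocab.setdefault(w, len(vocab))
            return vocab

        src_vocab = build_vocab(c for c, _ in pairs)   # complex side
        tgt_vocab = build_vocab(s for _, s in pairs)   # simple side

        def encode(sentence, vocab):
            return [vocab["<s>"]] + [vocab[w] for w in sentence.split()] + [vocab["</s>"]]

        # Each (encode(complex), encode(simple)) pair feeds a standard
        # encoder-decoder (Transformer) exactly as a bilingual pair would.
        print(encode(pairs[0][0], src_vocab), encode(pairs[0][1], tgt_vocab))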

  • Open Access

    ARTICLE

    Neural Machine Translation by Fusing Key Information of Text

    Shijie Hu1, Xiaoyu Li1,*, Jiayu Bai1, Hang Lei1, Weizhong Qian1, Sunqiang Hu1, Cong Zhang2, Akpatsa Samuel Kofi1, Qian Qiu2,3, Yong Zhou4, Shan Yang5

    CMC-Computers, Materials & Continua, Vol.74, No.2, pp. 2803-2815, 2023, DOI:10.32604/cmc.2023.032732 - 31 October 2022

    Abstract When the Transformer was proposed by Google in 2017, it was first used for machine translation tasks and achieved state-of-the-art results at that time. Although current neural machine translation models can generate high-quality translation results, there are still mistranslations and omissions in the translation of key information in long sentences. On the other hand, the most important part of traditional translation tasks is the translation of key information. In the translation results, as long as the key information is translated accurately and completely, even if other parts of the results are…

  • Open Access

    ARTICLE

    DLBT: Deep Learning-Based Transformer to Generate Pseudo-Code from Source Code

    Walaa Gad1,*, Anas Alokla1, Waleed Nazih2, Mustafa Aref1, Abdel-badeeh Salem1

    CMC-Computers, Materials & Continua, Vol.70, No.2, pp. 3117-3132, 2022, DOI:10.32604/cmc.2022.019884 - 27 September 2021

    Abstract Understanding the content of source code and its regular expressions is very difficult when they are written in an unfamiliar language. Pseudo-code explains and describes the content of the code without using syntax or programming-language technologies. However, writing pseudo-code for each code instruction is laborious. Recently, neural machine translation has been used to generate textual descriptions for source code. In this paper, a novel Deep Learning-Based Transformer (DLBT) model is proposed for automatic pseudo-code generation from source code. The proposed model uses deep learning based on Neural Machine Translation (NMT)…
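
    One concrete piece of a code-to-pseudo-code pipeline that is easy to show: turning source code into the token sequence an NMT-style encoder consumes. Using Python's tokenize module here is an illustrative choice, not necessarily how DLBT tokenizes its input.

        import io
        import tokenize

        def code_to_tokens(source: str) -> list[str]:
            """Flatten Python source into encoder-ready tokens."""
            skip = {tokenize.NEWLINE, tokenize.NL, tokenize.INDENT,
                    tokenize.DEDENT, tokenize.ENDMARKER}
            return [tok.string
                    for tok in tokenize.generate_tokens(io.StringIO(source).readline)
                    if tok.type not in skip]

        print(code_to_tokens("total = price * quantity"))
        # ['total', '=', 'price', '*', 'quantity']
        # A seq2seq model maps this to a target such as:
        # "multiply price by quantity and store the result in total"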

  • Open Access

    ARTICLE

    A Novel Beam Search to Improve Neural Machine Translation for English-Chinese

    Xinyue Lin1, Jin Liu1,*, Jianming Zhang2, Se-Jung Lim3

    CMC-Computers, Materials & Continua, Vol.65, No.1, pp. 387-404, 2020, DOI:10.32604/cmc.2020.010984 - 23 July 2020

    Abstract Neural Machine Translation (NMT) is an end-to-end learning approach for automated translation that overcomes the weaknesses of conventional phrase-based translation systems. Although NMT-based systems have gained popularity in commercial translation applications, there is still plenty of room for improvement. As the most popular search algorithm in NMT, beam search is vital to the translation result. However, traditional beam search can produce duplicate or missing translations due to its target-sequence selection strategy. Aiming to alleviate this problem, this paper proposes improvements to neural machine translation based on a novel beam search evaluation function, and we…
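
    A self-contained beam search with a pluggable evaluation function. The length-normalised score shown is one common choice that counteracts the duplicate/missing-translation bias mentioned above; the paper's actual evaluation function is not given in this excerpt, and the toy distribution stands in for a real decoder.

        import math

        def step_probs(prefix):
            # Hypothetical decoder: a fixed next-token distribution for illustration.
            return {"the": 0.4, "cat": 0.3, "sat": 0.2, "</s>": 0.1}

        def beam_search(beam_size=2, max_len=4, length_alpha=0.6):
            beams = [([], 0.0)]                      # (tokens, sum of log-probs)
            for _ in range(max_len):
                candidates = []
                for tokens, logp in beams:
                    if tokens and tokens[-1] == "</s>":
                        candidates.append((tokens, logp))   # keep finished hypotheses
                        continue
                    for tok, p in step_probs(tokens).items():
                        candidates.append((tokens + [tok], logp + math.log(p)))
                # Evaluation function: length normalisation rescores hypotheses so
                # plain log-probability does not systematically favour short output.
                candidates.sort(key=lambda c: c[1] / len(c[0]) ** length_alpha,
                                reverse=True)
                beams = candidates[:beam_size]
            return beams

        for tokens, logp in beam_search():
            print(" ".join(tokens), round(logp, 3))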

  • Open Access

    ARTICLE

    Improve Neural Machine Translation by Building Word Vector with Part of Speech

    Jinyingming Zhang1, Jin Liu1,*, Xinyue Lin1

    Journal on Artificial Intelligence, Vol.2, No.2, pp. 79-88, 2020, DOI:10.32604/jai.2020.010476 - 15 July 2020

    Abstract Neural Machine Translation (NMT) based systems are an important technology for translation applications. However, there is plenty of room for improvement in NMT. In the process of NMT, traditional word vectors cannot distinguish the same word under different parts of speech (POS). Aiming to alleviate this problem, this paper proposes a new word-vector training method based on POS features. It can efficiently improve the quality of translation by adding POS features to the training process of word vectors. We conducted extensive experiments to evaluate our method, and the experimental results show…
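
    The core preprocessing trick is easy to sketch: retag each token as word_POS before embedding training, so homographs such as "book" (noun) and "book" (verb) receive separate vectors. The inline tag lists are illustrative; in practice a real POS tagger would supply them.

        def tag_tokens(tokens, pos_tags):
            # Fuse each word with its POS so the embedding-table keys differ.
            return [f"{w}_{t}" for w, t in zip(tokens, pos_tags)]

        print(tag_tokens("book a flight".split(), ["VERB", "DET", "NOUN"]))
        print(tag_tokens("read the book".split(), ["VERB", "DET", "NOUN"]))
        # ['book_VERB', 'a_DET', 'flight_NOUN']
        # ['read_VERB', 'the_DET', 'book_NOUN']
        # Any embedding trainer (word2vec, fastText, ...) run on the retagged
        # corpus now learns distinct vectors for book_VERB and book_NOUN.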

  • Open Access

    ARTICLE

    Corpus Augmentation for Improving Neural Machine Translation

    Zijian Li1, Chengying Chi1,*, Yunyun Zhan2,*

    CMC-Computers, Materials & Continua, Vol.64, No.1, pp. 637-650, 2020, DOI:10.32604/cmc.2020.010265 - 20 May 2020

    Abstract The translation quality of neural machine translation (NMT) systems depends largely on the quality of the large-scale bilingual parallel corpora available. Research shows that, under conditions of limited resources, the performance of NMT is greatly reduced, and a large amount of high-quality bilingual parallel data is needed to train a competitive translation model. However, not all languages have large-scale, high-quality bilingual corpus resources available. In these cases, improving the quality of the corpora has become the main focus for increasing the accuracy of NMT results. This paper proposes a new method to improve…
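
    Since the abstract is cut off before the method, here is a hedged sketch of back-translation, one widely used corpus-augmentation recipe (not necessarily the paper's): translate target-side monolingual text back to the source language with a reverse model, then pair the synthetic sources with the real targets.

        def back_translate(target_sentence: str) -> str:
            # Hypothetical stand-in for a trained target-to-source model.
            return "<synthetic-source of: " + target_sentence + ">"

        monolingual_targets = ["the weather is nice today",
                               "she reads every evening"]

        # Synthetic parallel pairs enlarge the training corpus for the forward model.
        augmented_corpus = [(back_translate(t), t) for t in monolingual_targets]
        for src, tgt in augmented_corpus:
            print(src, "->", tgt)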

  • Open Access

    ARTICLE

    Dependency-Based Local Attention Approach to Neural Machine Translation

    Jing Qiu1, Yan Liu2, Yuhan Chai2, Yaqi Si2, Shen Su1,*, Le Wang1,*, Yue Wu3

    CMC-Computers, Materials & Continua, Vol.59, No.2, pp. 547-562, 2019, DOI:10.32604/cmc.2019.05892

    Abstract Recently, dependency information has been used in different ways to improve neural machine translation: for example, by adding dependency labels to the hidden states of source words, or by extracting the contiguous information of a source word from the dependency tree, learning it independently, and adding it into the Neural Machine Translation (NMT) model as a unit in various ways. However, these works are all limited to using dependency information to enrich the hidden states of source words. Since many works in Statistical Machine Translation (SMT) and NMT have proven the…
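
    A numpy sketch of one way to make attention "local" to the dependency tree, in the spirit of the title: mask the attention scores so each word attends only to itself, its head, and its children. The toy arcs, shapes, and masking scheme are illustrative assumptions, not the paper's exact formulation.

        import numpy as np

        rng = np.random.default_rng(0)

        def softmax(x):
            e = np.exp(x - x.max(axis=-1, keepdims=True))
            return e / e.sum(axis=-1, keepdims=True)

        # Toy dependency arcs for "the cat sat down": head index per token, -1 = root.
        heads = [1, 2, -1, 2]

        def dependency_mask(heads):
            n = len(heads)
            mask = np.eye(n, dtype=bool)                  # every token sees itself
            for child, head in enumerate(heads):
                if head >= 0:
                    mask[child, head] = mask[head, child] = True
            return mask

        def local_attention(Q, K, V, mask):
            scores = Q @ K.T / np.sqrt(Q.shape[-1])
            scores = np.where(mask, scores, -1e9)         # block non-neighbours
            return softmax(scores) @ V

        n, d = 4, 8
        Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
        print(local_attention(Q, K, V, dependency_mask(heads)).shape)   # (4, 8)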
