Open Access

ARTICLE


Improving Machine Translation Formality with Large Language Models

Murun Yang1,*, Fuxue Li2

1 School of Computer Science and Engineering, Northeastern University, Shenyang, 110819, China
2 College of Electrical Engineering, Yingkou Institute of Technology, Yingkou, 115014, China

* Corresponding Author: Murun Yang.

Computers, Materials & Continua 2025, 82(2), 2061-2075. https://doi.org/10.32604/cmc.2024.058248

Abstract

Preserving formal style in neural machine translation (NMT) is essential, yet it is often overlooked as an optimization objective during training. This oversight can lead to translations that, though accurate, lack formality. In this paper, we propose a method for improving NMT formality with large language models (LLMs), which combines the style transfer and evaluation capabilities of an LLM with the high-quality translation generation ability of NMT models. The proposed method (namely INMTF) encompasses two approaches. The first uses an LLM to revise the NMT-generated translation, ensuring a formal translation style. The second employs an LLM as a reward model that scores translation formality, and then applies reinforcement learning to fine-tune the NMT model to maximize the reward score, thereby enhancing the formality of the generated translations. Considering the substantial parameter size of LLMs, we also explore methods to reduce the computational cost of INMTF. Experimental results demonstrate that INMTF significantly outperforms baselines in both translation formality and translation quality, with an improvement of +9.19 style accuracy points on the German-to-English task and a +2.16 COMET score on the Russian-to-English task. Furthermore, our work demonstrates the potential of integrating LLMs within NMT frameworks to bridge the gap between NMT outputs and the formality required in real-world translation scenarios.
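To make the two approaches in the abstract concrete, the sketch below outlines how they might be wired together in Python. It is a minimal illustration under stated assumptions, not the authors' implementation: the `llm` callable, the `nmt_model.sample_with_log_prob` method, the prompts, and the PyTorch-style optimizer interface are all hypothetical placeholders.

```python
# Illustrative sketch of the two INMTF approaches described in the abstract.
# All names here (llm, nmt_model, optimizer, prompts) are hypothetical
# placeholders, not the paper's actual implementation.

REVISE_PROMPT = (
    "Rewrite the following English translation in a formal style, "
    "preserving its meaning:\n{translation}\nFormal rewrite:"
)

SCORE_PROMPT = (
    "On a scale from 0 (very informal) to 1 (very formal), rate the "
    "formality of this sentence. Reply with a number only:\n{translation}"
)

def revise_for_formality(llm, translation: str) -> str:
    """Approach 1: use the LLM to revise an NMT output into formal style."""
    return llm(REVISE_PROMPT.format(translation=translation)).strip()

def formality_reward(llm, translation: str) -> float:
    """Approach 2, scoring step: use the LLM as a formality reward model."""
    try:
        return float(llm(SCORE_PROMPT.format(translation=translation)))
    except ValueError:
        return 0.0  # fall back if the LLM reply is not a clean number

def reinforce_step(nmt_model, optimizer, src: str, llm) -> float:
    """Approach 2, update step: one REINFORCE-style policy-gradient update
    that pushes the NMT model toward higher formality reward."""
    # Sample a translation and its log-probability under the current policy
    # (sample_with_log_prob is an assumed interface on the NMT model).
    hyp, log_prob = nmt_model.sample_with_log_prob(src)
    reward = formality_reward(llm, hyp)
    loss = -reward * log_prob  # minimizing this maximizes expected reward
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward
```

In practice, a REINFORCE update like this is typically stabilized with a baseline for variance reduction and combined with a translation-quality term so the model does not trade adequacy for formality; the gains the abstract reports in both style accuracy and COMET suggest such a balance.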

Keywords

Neural machine translation; formality; large language model; text style transfer; style evaluation; reinforcement learning

Cite This Article

APA Style
Yang, M., & Li, F. (2025). Improving Machine Translation Formality with Large Language Models. Computers, Materials & Continua, 82(2), 2061–2075. https://doi.org/10.32604/cmc.2024.058248
Vancouver Style
Yang M, Li F. Improving Machine Translation Formality with Large Language Models. Comput Mater Contin. 2025;82(2):2061–2075. https://doi.org/10.32604/cmc.2024.058248
IEEE Style
M. Yang and F. Li, “Improving Machine Translation Formality with Large Language Models,” Comput. Mater. Contin., vol. 82, no. 2, pp. 2061–2075, 2025. https://doi.org/10.32604/cmc.2024.058248



Copyright © 2025 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.