Open Access

REVIEW


A Critical Review of Methods and Challenges in Large Language Models

Milad Moradi1,*, Ke Yan2, David Colwell2, Matthias Samwald3, Rhona Asgari1

1 AI Research, Tricentis, Vienna, 1220, Austria
2 AI Research, Tricentis, Sydney, NSW 2010, Australia
3 Institute of Artificial Intelligence, Center for Medical Statistics, Informatics, and Intelligent Systems, Medical University of Vienna, Vienna, 1090, Austria

* Corresponding Author: Milad Moradi

(This article belongs to the Special Issue: Artificial Intelligence Current Perspectives and Alternative Paths: From eXplainable AI to Generative AI and Data Visualization Technologies)

Computers, Materials & Continua 2025, 82(2), 1681-1698. https://doi.org/10.32604/cmc.2025.061263

Abstract

This critical review provides an in-depth analysis of Large Language Models (LLMs), encompassing their foundational principles, diverse applications, and advanced training methodologies. We critically examine the evolution from Recurrent Neural Networks (RNNs) to Transformer models, highlighting the significant advancements and innovations in LLM architectures. The review explores state-of-the-art techniques such as in-context learning and various fine-tuning approaches, with an emphasis on optimizing parameter efficiency. We also discuss methods for aligning LLMs with human preferences, including reinforcement learning frameworks and human feedback mechanisms. The emerging technique of retrieval-augmented generation, which integrates external knowledge into LLMs, is also evaluated. Additionally, we address the ethical considerations of deploying LLMs, stressing the importance of responsible and mindful application. By identifying current gaps and suggesting future research directions, this review provides a comprehensive and critical overview of the present state and potential advancements in LLMs. This work serves as an insightful guide for researchers and practitioners in artificial intelligence, offering a unified perspective on the strengths, limitations, and future prospects of LLMs.

Keywords

Large language models; artificial intelligence; natural language processing; machine learning; generative artificial intelligence

Cite This Article

APA Style
Moradi, M., Yan, K., Colwell, D., Samwald, M., & Asgari, R. (2025). A critical review of methods and challenges in large language models. Computers, Materials & Continua, 82(2), 1681–1698. https://doi.org/10.32604/cmc.2025.061263
Vancouver Style
Moradi M, Yan K, Colwell D, Samwald M, Asgari R. A critical review of methods and challenges in large language models. Comput Mater Contin. 2025;82(2):1681–1698. https://doi.org/10.32604/cmc.2025.061263
IEEE Style
M. Moradi, K. Yan, D. Colwell, M. Samwald, and R. Asgari, “A Critical Review of Methods and Challenges in Large Language Models,” Comput. Mater. Contin., vol. 82, no. 2, pp. 1681–1698, 2025. https://doi.org/10.32604/cmc.2025.061263



Copyright © 2025 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.