Open Access

ARTICLE

Deterministic Convergence Analysis for GRU Networks via Smoothing Regularization

Qian Zhu1, Qian Kang1, Tao Xu2, Dengxiu Yu3,*, Zhen Wang1

1 School of Cybersecurity, Northwestern Polytechnical University, Xi’an, 710072, China
2 Unmanned System Research Institute, Northwestern Polytechnical University, Xi’an, 710072, China
3 School of Artificial Intelligence, Optics and Electronics (iOPEN), Northwestern Polytechnical University, Xi’an, 710072, China

* Corresponding Author: Dengxiu Yu.

Computers, Materials & Continua 2025, 83(2), 1855-1879. https://doi.org/10.32604/cmc.2025.061913

Abstract

In this study, we present a deterministic convergence analysis of Gated Recurrent Unit (GRU) networks enhanced by a smoothing L1 regularization technique. While GRU architectures effectively mitigate gradient vanishing/exploding issues in sequential modeling, they remain prone to overfitting, particularly under noisy or limited training data. Traditional L1 regularization, despite enforcing sparsity and accelerating optimization, introduces non-differentiable points in the error function, leading to oscillations during training. To address this, we propose a novel smoothing L1 regularization framework that replaces the non-differentiable absolute value function with a quadratic approximation, ensuring gradient continuity and stabilizing the optimization landscape. Theoretically, we rigorously establish three key properties of the resulting smoothing L1-regularized GRU (SL1-GRU) model: (1) monotonic decrease of the error function across iterations, (2) weak convergence, characterized by vanishing gradients as the number of iterations approaches infinity, and (3) strong convergence of the network weights to fixed points under finite conditions. Comprehensive experiments on benchmark datasets spanning function approximation, classification (KDD Cup 1999 Data, MNIST), and regression (Boston Housing, Energy Efficiency) demonstrate the superiority of SL1-GRU over baseline models (RNN, LSTM, GRU, L1-GRU, L2-GRU). Empirical results show that, compared with the unregularized GRU, SL1-GRU achieves 1.0%–2.4% higher test accuracy in classification and 7.8%–15.4% lower mean squared error in regression, while reducing training time by 8.7%–20.1%. These outcomes validate the method's efficacy in balancing computational efficiency and generalization capability and strongly corroborate the theoretical analysis. The proposed framework not only resolves the non-differentiability challenge of L1 regularization but also provides a theoretical foundation for convergence guarantees in recurrent neural network training.
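To make the smoothing idea concrete, the sketch below illustrates one common way to replace the absolute value in an L1 penalty with a quadratic segment near zero so that both the penalty and its gradient remain continuous. This is an illustrative assumption, not the paper's exact formulation: the function names, the smoothing threshold a, and the penalty weight lam are hypothetical, and the authors' quadratic approximation may take a different form.

import numpy as np

def smoothed_abs(w, a=0.1):
    # Piecewise smoothing of |w|: quadratic on [-a, a], |w| outside.
    # At |w| = a the two pieces and their derivatives agree, so the
    # result is continuously differentiable (Huber-like smoothing).
    w = np.asarray(w, dtype=float)
    quad = w**2 / (2.0 * a) + a / 2.0
    return np.where(np.abs(w) <= a, quad, np.abs(w))

def smoothed_l1_penalty(weights, lam=1e-3, a=0.1):
    # Smoothed L1 penalty added to the GRU training loss over all weight arrays.
    return lam * sum(smoothed_abs(W, a).sum() for W in weights)

def smoothed_l1_grad(w, lam=1e-3, a=0.1):
    # Gradient of the smoothed penalty: lam*w/a inside [-a, a],
    # lam*sign(w) outside; continuous everywhere, unlike the plain L1 subgradient.
    w = np.asarray(w, dtype=float)
    return lam * np.where(np.abs(w) <= a, w / a, np.sign(w))

In a gradient-descent update for the GRU weights, this penalty gradient would simply be added to the data-loss gradient; because it is continuous, the oscillations caused by the kink of |w| at zero are avoided, which is the property the convergence analysis relies on.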

Keywords

Gated recurrent unit; L1 regularization; convergence

Cite This Article

APA Style
Zhu, Q., Kang, Q., Xu, T., Yu, D., & Wang, Z. (2025). Deterministic Convergence Analysis for GRU Networks via Smoothing Regularization. Computers, Materials & Continua, 83(2), 1855–1879. https://doi.org/10.32604/cmc.2025.061913
Vancouver Style
Zhu Q, Kang Q, Xu T, Yu D, Wang Z. Deterministic Convergence Analysis for GRU Networks via Smoothing Regularization. Comput Mater Contin. 2025;83(2):1855–1879. https://doi.org/10.32604/cmc.2025.061913
IEEE Style
Q. Zhu, Q. Kang, T. Xu, D. Yu, and Z. Wang, “Deterministic Convergence Analysis for GRU Networks via Smoothing Regularization,” Comput. Mater. Contin., vol. 83, no. 2, pp. 1855–1879, 2025. https://doi.org/10.32604/cmc.2025.061913



Copyright © 2025 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.