Open Access

ARTICLE


An Efficient Long Short-Term Memory Model for Digital Cross-Language Summarization

by Y. C. A. Padmanabha Reddy1, Shyam Sunder Reddy Kasireddy2, Nageswara Rao Sirisala3, Ramu Kuchipudi4, Purnachand Kollapudi5,*

1 Department of CSE, B V Raju Institute of Technology, Narsapur, Medak, T.S, 502 313, India
2 Department of IT, Vasavi College of Engineering, Hyderabad, T.S, 500089, India
3 Department of CSE, K.S.R.M College of Engineering, Kadapa, A.P, 516003, India
4 Department of IT, C.B.I.T, Gandipet, Hyderabad, Telangana, 500075, India
5 Department of CSE, B V Raju Institute of Technology, Narsapur, Medak, T.S, 502 313, India

* Corresponding Author: Purnachand Kollapudi. Email: email

Computers, Materials & Continua 2023, 74(3), 6389-6409. https://doi.org/10.32604/cmc.2023.034072

Abstract

The rise of social networking has led to a growing volume of Internet-accessible digital documents published in several languages. Cross-Language Text Summarization (CLTS) processes such documents by generating summaries in a target language from source documents written in disparate languages, which requires handling the documents' contextual semantic data together with a decoding scheme. This paper presents a multilingual cross-language document-processing approach for abstractive summarization. The proposed model, termed Hidden Markov Model LSTM Reinforcement Learning (HMMlstmRL), operates in three stages. First, a Hidden Markov Model computes keywords over the cross-language words for clustering. In the second stage, bi-directional long short-term memory networks extract keywords in the cross-language process. Finally, HMMlstmRL applies a voting concept from reinforcement learning to identify and extract the keywords. The performance of the proposed HMMlstmRL is 2% better than that of the conventional bi-directional LSTM model.
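To make the second stage of the pipeline concrete, the sketch below shows how a bi-directional LSTM can be used for per-token keyword extraction, as described in the abstract. This is a minimal, hypothetical illustration: the class name, dimensions, and binary keyword/non-keyword tagging scheme are assumptions for exposition, not the authors' implementation, and the HMM clustering and reinforcement-learning voting stages are not shown.

```python
# Minimal sketch of a bi-directional LSTM keyword tagger (illustrative only).
import torch
import torch.nn as nn

class BiLSTMKeywordTagger(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256, num_tags=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # The bi-directional LSTM reads each sentence forward and backward.
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        # Per-token classifier: keyword (1) vs. non-keyword (0).
        self.classifier = nn.Linear(2 * hidden_dim, num_tags)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)   # (batch, seq_len, embed_dim)
        outputs, _ = self.bilstm(embedded)     # (batch, seq_len, 2 * hidden_dim)
        return self.classifier(outputs)        # (batch, seq_len, num_tags)

# Example: score a toy batch of two 6-token sentences.
model = BiLSTMKeywordTagger(vocab_size=10_000)
tokens = torch.randint(1, 10_000, (2, 6))
tag_scores = model(tokens)                     # shape: (2, 6, 2)
keyword_mask = tag_scores.argmax(dim=-1)       # 1 where a token is tagged as a keyword
```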

Cite This Article

APA Style
Padmanabha Reddy, Y.C.A., Kasireddy, S.S.R., Sirisala, N.R., Kuchipudi, R., & Kollapudi, P. (2023). An efficient long short-term memory model for digital cross-language summarization. Computers, Materials & Continua, 74(3), 6389-6409. https://doi.org/10.32604/cmc.2023.034072
Vancouver Style
Padmanabha Reddy YCA, Kasireddy SSR, Sirisala NR, Kuchipudi R, Kollapudi P. An efficient long short-term memory model for digital cross-language summarization. Comput Mater Contin. 2023;74(3):6389-6409. https://doi.org/10.32604/cmc.2023.034072
IEEE Style
Y. C. A. Padmanabha Reddy, S. S. R. Kasireddy, N. R. Sirisala, R. Kuchipudi, and P. Kollapudi, “An Efficient Long Short-Term Memory Model for Digital Cross-Language Summarization,” Comput. Mater. Contin., vol. 74, no. 3, pp. 6389-6409, 2023. https://doi.org/10.32604/cmc.2023.034072



Copyright © 2023 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.