Open Access

ARTICLE

Ext-ICAS: A Novel Self-Normalized Extractive Intra Cosine Attention Similarity Summarization

P. Sharmila1,*, C. Deisy1, S. Parthasarathy2

1 Department of Information Technology, Thiagarajar College of Engineering, Madurai, India
2 Department of Data Science, Thiagarajar College of Engineering, Madurai, India

* Corresponding Author: P. Sharmila

Computer Systems Science and Engineering 2023, 45(1), 377-393. https://doi.org/10.32604/csse.2023.027481

Abstract

With the continuous growth of online news articles, efficient summarization techniques are needed to cope with information overload. Abstractive summarization is highly complex: it requires deeper understanding and proper reasoning to produce its own summary outline, and the task is typically framed as sequence-to-sequence (seq2seq) modeling. Existing seq2seq methods perform well on short sequences, but on long sequences performance degrades because of high computational cost. This paper therefore proposes a two-phase self-normalized deep neural document summarization model consisting of an improved extractive cosine-normalization phase and a seq2seq abstractive phase. The novelty lies in parallelizing sequence computation during training by incorporating a feed-forward, self-normalized neural network in the extractive phase using Intra Cosine Attention Similarity (Ext-ICAS) with sentence dependency position; no explicit normalization technique is required. The proposed abstractive Bidirectional Long Short-Term Memory (Bi-LSTM) encoder sequence model outperforms a Bidirectional Gated Recurrent Unit (Bi-GRU) encoder, with lower training loss and faster convergence. Evaluated on the Cable News Network (CNN)/Daily Mail dataset, the model achieved an average ROUGE score of 0.435, and computational training in the extractive phase was reduced by 59% in the average number of similarity computations.
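To make the extractive idea concrete, the following is a minimal sketch, not the authors' implementation: sentences are scored by their pairwise cosine similarity over sentence embeddings, and because cosine similarity is length-normalized by construction, no explicit normalization step is needed. All names (ext_icas_scores, extract_top_k), the mean-similarity scoring rule, and the use of NumPy are illustrative assumptions standing in for the paper's Ext-ICAS attention with sentence dependency position.

    # Illustrative sketch only; names and scoring rule are assumptions,
    # not the paper's exact Ext-ICAS formulation.
    import numpy as np

    def ext_icas_scores(sentence_embeddings: np.ndarray) -> np.ndarray:
        """Score each sentence by its mean cosine similarity to the
        other sentences (a stand-in for intra cosine attention similarity)."""
        # Normalize rows to unit length; cosine similarity then
        # reduces to a plain dot product, so no extra normalization layer.
        norms = np.linalg.norm(sentence_embeddings, axis=1, keepdims=True)
        unit = sentence_embeddings / np.clip(norms, 1e-12, None)
        sim = unit @ unit.T              # pairwise cosine similarities in [-1, 1]
        np.fill_diagonal(sim, 0.0)       # ignore self-similarity
        return sim.mean(axis=1)          # one attention-style score per sentence

    def extract_top_k(sentences, embeddings, k=3):
        """Return the k highest-scoring sentences in original document order."""
        scores = ext_icas_scores(embeddings)
        top = sorted(np.argsort(scores)[-k:])  # keep document order
        return [sentences[i] for i in top]

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        sents = [f"sentence {i}" for i in range(6)]
        embs = rng.normal(size=(6, 128))   # stand-in sentence embeddings
        print(extract_top_k(sents, embs, k=2))

In a pipeline of this shape, the extracted top-k sentences would then be fed to the abstractive seq2seq (Bi-LSTM encoder) phase, which shortens the input sequence and reduces the number of similarity computations during training.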

Cite This Article

APA Style
Sharmila, P., Deisy, C., & Parthasarathy, S. (2023). Ext-ICAS: A novel self-normalized extractive intra cosine attention similarity summarization. Computer Systems Science and Engineering, 45(1), 377-393. https://doi.org/10.32604/csse.2023.027481
Vancouver Style
Sharmila P, Deisy C, Parthasarathy S. Ext-ICAS: A novel self-normalized extractive intra cosine attention similarity summarization. Comput Syst Sci Eng. 2023;45(1):377-393. https://doi.org/10.32604/csse.2023.027481
IEEE Style
P. Sharmila, C. Deisy, and S. Parthasarathy, “Ext-ICAS: A Novel Self-Normalized Extractive Intra Cosine Attention Similarity Summarization,” Comput. Syst. Sci. Eng., vol. 45, no. 1, pp. 377-393, 2023. https://doi.org/10.32604/csse.2023.027481



Copyright © 2023 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.