Open Access

ARTICLE

Enhancing ChatGPT’s Querying Capability with Voice-Based Interaction and CNN-Based Impair Vision Detection Model

by Awais Ahmad1, Sohail Jabbar1,*, Sheeraz Akram1, Anand Paul2, Umar Raza3, Nuha Mohammed Alshuqayran1

1 College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, 11432, Saudi Arabia
2 School of Computer Science and Engineering, Kyungpook National University, Daegu, 41566, South Korea
3 Department of Engineering, Manchester Metropolitan University, Manchester, M15 6BH, UK

* Corresponding Author: Sohail Jabbar.

(This article belongs to the Special Issue: Advance Machine Learning for Sentiment Analysis over Various Domains and Applications)

Computers, Materials & Continua 2024, 78(3), 3129-3150. https://doi.org/10.32604/cmc.2024.045385

Abstract

This paper presents an approach to enhancing the querying capability of ChatGPT, a conversational artificial intelligence model, by incorporating voice-based interaction and a convolutional neural network (CNN)-based impaired vision detection model. The proposed system improves user experience and accessibility by allowing users to interact with ChatGPT through voice commands. A CNN-based model detects visual impairment in users, enabling the system to adapt its responses and provide appropriate assistance. The work addresses two persistent challenges in artificial intelligence (AI), user experience and inclusivity, with the goal of making ChatGPT more accessible and useful to a broader audience. Combining voice-based interaction with impaired vision detection is a novel direction for conversational AI, and one with the potential to benefit users with visual impairments in particular. A modular system design keeps the approach adaptable and scalable, which is essential for practical deployment. Above all, the solution is user-centered: tailoring responses to users with visual impairments shows how AI can not only understand but also accommodate individual needs and preferences.
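The abstract outlines the pipeline, voice query capture, a CNN that flags visual impairment, and impairment-aware responses from ChatGPT, but this page carries no implementation detail. The sketch below is a minimal illustration of how such a pipeline might be wired together in Python; the library choices (speech_recognition, pyttsx3, TensorFlow/Keras, the openai client), the CNN layout, and every function name are assumptions made for illustration only, not the authors' implementation.

```python
# Illustrative sketch only -- not the authors' code. Assumed libraries:
# speech_recognition, pyttsx3, tensorflow, openai (>=1.0 client API).
import speech_recognition as sr
import pyttsx3
import tensorflow as tf
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def build_impairment_cnn(input_shape=(128, 128, 3)):
    """Hypothetical CNN for binary impaired/non-impaired vision classification."""
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])


def transcribe_voice_query() -> str:
    """Capture a spoken query from the microphone and convert it to text."""
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        audio = recognizer.listen(source)
    return recognizer.recognize_google(audio)


def query_chatgpt(prompt: str, visually_impaired: bool) -> str:
    """Send the transcribed query to ChatGPT, adapting the system prompt
    when an impairment has been detected."""
    system_msg = (
        "Answer concisely and describe any visual content verbally."
        if visually_impaired
        else "Answer the user's question."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": system_msg},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content


def speak(text: str) -> None:
    """Read the response aloud for users who cannot comfortably read the screen."""
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()


# Example flow: classify an eye image once, then handle a voice query.
# cnn = build_impairment_cnn()          # would be trained on labelled eye images
# impaired = cnn.predict(eye_image)[0, 0] > 0.5
# answer = query_chatgpt(transcribe_voice_query(), visually_impaired=bool(impaired))
# speak(answer)
```

In a real deployment the CNN would presumably be trained on a labelled eye-image dataset and its prediction stored per user, so the impairment check need not run on every query; the modular structure described in the abstract would allow the recognizer, classifier, and language model to be swapped independently.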

Keywords


Cite This Article

APA Style
Ahmad, A., Jabbar, S., Akram, S., Paul, A., Raza, U. et al. (2024). Enhancing ChatGPT’s querying capability with voice-based interaction and CNN-based impair vision detection model. Computers, Materials & Continua, 78(3), 3129-3150. https://doi.org/10.32604/cmc.2024.045385
Vancouver Style
Ahmad A, Jabbar S, Akram S, Paul A, Raza U, Alshuqayran NM. Enhancing ChatGPT’s querying capability with voice-based interaction and CNN-based impair vision detection model. Comput Mater Contin. 2024;78(3):3129-3150. https://doi.org/10.32604/cmc.2024.045385
IEEE Style
A. Ahmad, S. Jabbar, S. Akram, A. Paul, U. Raza, and N. M. Alshuqayran, “Enhancing ChatGPT’s Querying Capability with Voice-Based Interaction and CNN-Based Impair Vision Detection Model,” Comput. Mater. Contin., vol. 78, no. 3, pp. 3129-3150, 2024. https://doi.org/10.32604/cmc.2024.045385



Copyright © 2024 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.