Open Access

ARTICLE


Enhanced Adaptive Brain-Computer Interface Approach for Intelligent Assistance to Disabled Peoples

by Ali Usman1, Javed Ferzund1, Ahmad Shaf1, Muhammad Aamir1, Samar Alqhtani2,*, Khlood M. Mehdar3, Hanan Talal Halawani4, Hassan A. Alshamrani5, Abdullah A. Asiri5, Muhammad Irfan6

1 Department of Computer Science, COMSATS University Islamabad, Sahiwal Campus, Sahiwal, 57000, Pakistan
2 Department of Information Systems, College of Computer Science and Information Systems, Najran University, Najran, 61441, Saudi Arabia
3 Anatomy Department, Medicine College, Najran University, Najran, 61441, Saudi Arabia
4 Computer Science Department, College of Computer Science and Information Systems, Najran University, Najran, 61441, Saudi Arabia
5 Radiological Sciences Department, College of Applied Medical Sciences and Information Systems, Najran University, Najran, 61441, Kingdom of Saudi Arabia
6 Electrical Engineering Department, College of Engineering, Najran University, Najran, 61441, Saudi Arabia

* Corresponding Author: Samar Alqhtani. Email: email

Computer Systems Science and Engineering 2023, 46(2), 1355-1369. https://doi.org/10.32604/csse.2023.034682

Abstract

Assistive devices for disabled people based on Brain-Computer Interaction (BCI) technology are becoming a vital part of bio-medical engineering. People with physical disabilities need assistive devices to perform their daily tasks. In these devices, high latency must be addressed appropriately. Therefore, the main goal of this research is to implement a real-time BCI architecture with minimum latency for command actuation. The proposed architecture is capable of communicating between the different modules of the system by adopting an automated, intelligent data processing and classification approach. A Neuro-sky mind wave device has been used to transfer the data to our implemented server for command propulsion. A Think-Net Convolutional Neural Network (TN-CNN) architecture has been proposed to recognize the brain signals and classify them into six primary mental states. Data collection and processing are the responsibility of the central integrated server to minimize system load. Testing of the implemented architecture and deep learning model shows excellent results: the proposed system achieves minimal data loss and an accurate command-processing mechanism. The training and testing accuracies are 99% and 93% for the custom TN-CNN model. The proposed real-time architecture provides an intelligent data processing unit with fewer errors, and it will benefit assistive devices working on local and cloud servers.

Keywords


1  Introduction

Mental impairment and physical disability remain a significant challenge in medical science. Through collaboration between computer science and bioscience, we can address this challenge. Interacting with physical devices by converting brain signals into corresponding commands is a major challenge in BCI. Much progress has already been made in converting human brain signals into mechanical or non-mechanical actions. Brain signals pass through the spinal cord to the physical parts of the body to control them [1]. The body's muscles do not control movement; the nerves control it. The human brain is the primary source for gathering input and providing output to the human body. It behaves like a sensor network, in which the control system gathers input from multiple configured sensors and returns a single output.

The BCI objective is to scale input to output. Existing technology for converting human thoughts into physical activity or soft form is slower than required: the latency in transferring human thoughts to physical devices is high. BCI variants use different techniques to record multiple types of signals for classification purposes [2]. BCI systems are mainly characterized by dependability, invasiveness, and synchronization. These systems require different types of electrode placement for the signal acquisition process: some rely on surgically implanted electrodes, while others place the electrodes non-surgically.

For mentally and physically impaired people, BCI aims to gather signals directly from the brain and physical organs to control physical devices or to communicate with people or machines [3]. For disabled people, wheelchairs and communication mechanisms are built on a control structure that takes input from the brain; after a decision-making process, the signals are used to control physical or virtual machines [2]. Ilyas et al. describe EEG signal processing techniques for feature extraction and dimensionality reduction [4]. Principal component analysis (PCA) focuses on feature extraction from the signals. It is an unsupervised machine learning technique. The benefit of using this technique is that the transformed data dimensions do not correlate with one another: all dimensions of the data are totally orthogonal, or mutually uncorrelated.
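The decorrelation property described above can be illustrated with a minimal PCA sketch (not the authors' implementation): projecting a feature matrix onto its principal components via the SVD and checking that the resulting dimensions are mutually uncorrelated. The synthetic 200 × 32 matrix merely stands in for an EEG feature matrix.

```python
import numpy as np

def pca_transform(signals, n_components):
    """Project signals (trials x channels) onto principal components via SVD.

    After the transform, the resulting dimensions are mutually uncorrelated,
    which is the property highlighted in the text.
    """
    centered = signals - signals.mean(axis=0)       # zero-mean each channel
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T           # scores in component space

# Synthetic stand-in for an EEG feature matrix: 200 trials x 32 channels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))
scores = pca_transform(X, n_components=4)

# Off-diagonal covariance of the scores is (numerically) zero: uncorrelated.
cov = np.cov(scores, rowvar=False)
off_diag = cov - np.diag(np.diag(cov))
print(np.allclose(off_diag, 0.0, atol=1e-9))
```

In practice a library routine such as scikit-learn's PCA would be used, but the SVD form makes the orthogonality explicit.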

Electroencephalogram (EEG) technology serves the betterment of biomedicine and many other fields [5]. People with speech and other medical disabilities cannot communicate with doctors and other medical staff [6]. Ventilators provide life support to people in critical condition who have disability issues. EEG is a life-changing technology for people with disabilities: people without hands can control robotic hands for assistance, and people with speech disabilities can speak with the help of this technology [7]. With Electromyography (EMG) technology, the muscular activity of the body can be captured in the form of electrical signals. Electrodes can be inserted into the body to record electrical activity, which can be visualized using tools such as oscilloscopes [8]. The signal changes when a person concentrates on the specified muscle or moves it. Nerve condition can be monitored as the person concentrates more intensely or contracts the muscle more forcefully [9]. From this, we can determine the movement and speed of the concerned muscle by analyzing the speed of the electrical impulses. Non-invasive techniques are popular due to their convenience and safety. At the same time, because the skull dampens, disperses, and blurs the neuron-generated electromagnetic waves, non-invasive signal recorders produce a low signal resolution [10].

The central concept of this paper is to provide a BCI architecture for people with physical disabilities. Our BCI system aims to operate independently in real time with minimum data-flow latency. Previous research mainly focuses on applications and theories for BCI and fMRI, whereas this paper provides an architecture for synchronous communication between the different modules of a BCI system. The proposed system fulfills the following objectives:

1.    Our BCI system operates with fewer errors, is more user-friendly, and allows for a faster data flow rate.

2.    Our BCI design is asynchronous (self-paced). The majority of existing BCIs are synchronous, meaning that users can only interact with the application at specific periods determined by the system. Unlike a synchronous BCI, a self-paced BCI can issue commands at any time.

3.    It develops a real-time approach for capturing and classifying brain signals using a machine learning model.

4.    It provides a real-time server to visualize the current system state and the machine-learning-classified results regarding the current mental state.

The remaining sections of the paper are as follows: Section 2 covers the related work, Section 3 explains the structure and methodology of the proposed BCI system, Section 4 illustrates the results, and Section 5 concludes the paper with future directions.

2  Related Work

Brain-computer interaction is an emerging field in every context. Assisting disabled people to communicate through 3D imaging, gaming, and converting thoughts to text requires both hardware and software architecture [11]. Li et al. [12] work on a multi-feature fusion approach for EEG signals with fuzzy entropy. They describe the mechanism of fuzzy entropy and principal component analysis (PCA) for dimensionality reduction and accuracy estimation. The accuracy achieved with fuzzy entropy on a random forest was 88.26%, while the accuracy of the random forest with hierarchical entropy was higher, at 97.66%.

Chakladar et al. [13] record electroencephalographic activity for cursor movement in a 2D context. The system receives brain signals from the EEG machine and converts them into relative commands that define the actual control structure. A further problem discussed is the signal transfer rate, because the current transfer rate of BCIs is 25 bits/min. The signals are the electroencephalographic activity inside the brain. Garro et al. [14] describe the basic architecture of BCI, from the primary EEG input to the final conversion into relative commands. The architecture covers the complete BCI flow: the initial electrical signals pass through amplification steps, feature extraction is applied to the signals, and the individual result is passed to the feature translator, which provides the relative commands. The commands are then passed to the control interface for controlling a device.

Yu et al. [15] work on real-time collaboration with real-time applications for people with disabilities. The communication mechanism, environmental control, and creative expression are evaluated concisely to achieve optimal interaction between the real-time application and the BCI. Biofeedback from the human brain helps control real-time applications. On Amazon, for example, a person may want to search for and buy a product, but a person with a disability cannot do this task without another source. With a BCI using EEG, human thoughts can be converted into target commands that help the person interact easily with the real-time application.

Krol et al. [16] focus on passive BCI, interpreting human intentions, situational interaction, and emotional state. Real-time brain signal decoding is a cognitive monitoring approach for gaining information about the user's current, ongoing cognitive state. The paper focuses on the uses and benefits for both healthy and disabled people. To summarize the basic idea of passive BCI: the collected information and hybrid BCI technology benefit healthy and disabled people alike.

Lv et al. [17] improve the working efficiency of the BCI interface with verification techniques. While working with common spatial patterns, they verified the results of several machine classification models. The study shows the difference between the actual and imagined contexts of hand movements in terms of accuracy with respect to time. The cortical response of the human brain varies between the actual and imagined contexts.

Isa [18] works on the basic BCI workflow for a signal classification model. As in a traditional machine learning workflow, data gathering, preprocessing, and input preparation are the same in BCI. Signal transformation is performed with the Fast Fourier Transform (FFT) on the incoming EEG signals. To reduce the high dimensionality of the data, linear discriminant analysis is used for better understanding and for removing unnecessary features from the EEG signal data. The paper's central focus was defining a workflow using different machine learning techniques such as logistic regression, Naive Bayes, and the Support Vector Machine (SVM).
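The FFT step in such a workflow typically turns a raw EEG trace into frequency-band power features before a classifier is applied. A minimal sketch (not taken from [18]; the `band_power` helper and the synthetic 10 Hz signal are illustrative assumptions):

```python
import numpy as np

def band_power(signal, fs, band):
    """Mean spectral power of `signal` within a frequency band (lo, hi) in Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    lo, hi = band
    mask = (freqs >= lo) & (freqs < hi)
    return power[mask].mean()

fs = 500                                  # sampling rate used later in the paper (Hz)
t = np.arange(fs * 2) / fs                # two seconds of samples
sig = np.sin(2 * np.pi * 10 * t)          # synthetic 10 Hz "alpha" oscillation

alpha = band_power(sig, fs, (8, 13))      # band containing the oscillation
beta = band_power(sig, fs, (13, 30))      # band without it
print(alpha > beta)                       # the alpha band dominates
```

Band powers computed this way form the feature vectors that LDA and the listed classifiers would consume.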

Vourvopoulos et al. [19] work on the EEGlass prototype for an interactive brain-computer interface; this prototype consists of wearable glasses with attached electrodes for recording brain activity. Essential signal filtration for artifact removal and removal of high band-pass signals is applied so that only the needed bandwidth of signals is used in the learning and testing process [20]. The prototype achieved accurate results for a person's resting state and open-eyes state. The work as a whole describes the benefits of the BCI model and the signal composition process for a BCI architecture that deals with brain signals and asynchronous/synchronous signals [21]. Although a practitioner frequently faces two major difficulties [22], there are other forms of unsupervised learning: clustering, which requires identifying groups in the data, and density estimation, which summarizes the data distribution.

In [23], semi-supervised learning is examined, where the training data contain relatively few labeled examples and many unlabeled examples. Problems with vast amounts of input data (X) and only partially labeled output data (Y) are referred to as semi-supervised learning problems; they combine both supervised and unsupervised learning issues. In [24], the use of, or inspiration from, unsupervised approaches such as clustering and density estimation may be necessary to exploit the unlabeled data successfully. After groups or patterns have been discovered, supervised methods, or inspirations from supervised learning, may then be utilized to label the unlabeled instances.

Statistical analysis revealed that, within the wavelet family, Coiflets 1 was best suited for the correct categorization of EEG signals [25]. In that study, the authors tried to enhance computational efficiency by selecting the most appropriate wavelet function for EEG signal processing, effectively and precisely, with less computational time. Machine learning and deep learning algorithms are being used to classify signals from raw structures [26]. The prediction and classification accuracy and speed of deep learning models are very satisfying, and the mathematical foundation of machine learning models is strong enough to classify signals of different shapes and kinds.

Alsharif et al. [27] proposed the concept of neuro-marketing using EEG, fMRI, and eye-tracking, apart from traditional marketing strategies. The study mainly focuses on consumer intentions and on the advertisement of a business. Consumer mental state and reactions provide valuable information about intentions toward the business, which helps companies set their strategies for pricing and for developing new brands and products.

3  Methodology

The system model of the proposed real-time BCI architecture with minimum latency for command actuation is explained in this section. A Neuro-sky mind wave device has been used to transfer the data to our implemented server for command propulsion. Furthermore, a Think-Net CNN (TN-CNN) architecture has been proposed to recognize the brain signals and classify them into six primary mental states. The risk of error in medical tasks is non-negotiable; therefore, we have carefully designed each module of the BCI.

3.1 Data Acquisition

The first phase of the system is to record and convert live brain signals into raw data. Incoming electric pulses from the EEG headset, in the form of analog values, are recorded as a time series. The time series defines the order of the different chunks of signals recorded, along with their time and duration. The duration of each data chunk is defined and may vary from 3 to 5 s. During this signal recording phase, the actual structure of the data remains the same.
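The chunking described above can be sketched as a simple windowing step over the continuous recording; the `segment` helper below is illustrative, not the authors' code, and assumes a (samples x channels) array layout.

```python
import numpy as np

def segment(recording, fs, chunk_seconds):
    """Split a (samples x channels) recording into equal, time-ordered chunks.

    Trailing samples that do not fill a whole chunk are dropped; the sample
    values themselves are untouched, so the data structure is preserved.
    """
    step = int(fs * chunk_seconds)
    n_chunks = recording.shape[0] // step
    trimmed = recording[: n_chunks * step]
    return trimmed.reshape(n_chunks, step, recording.shape[1])

fs = 500                                    # 500 Hz sampling rate, as in the paper
recording = np.zeros((fs * 17, 32))         # 17 s of a 32-channel recording
chunks = segment(recording, fs, chunk_seconds=3)
print(chunks.shape)                         # five 3-second chunks of 1500 samples
```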

The brain signals of each person (both male and female) are classified into six main categories (Hand Start, Grasping, Lift, Hold, Replace, Release). There are 12 subjects (4 males and 8 females) in total, 10 series of trials for each subject, and approximately 30 trials within each series; the number of trials varies between series. The training set contains the first 8 series for each subject, and the test set contains the 9th and 10th series. Two files are provided: one contains the raw EEG data, and the second contains the corresponding event files for the trials. Event files are provided for the training data but not for the test data. In a machine learning context, the training data contain the corresponding labels, but the test data do not.
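The series-based split above can be expressed directly in code. This is a sketch under an assumed trial layout (the dictionary keys `subject`, `series`, and `trial` are hypothetical names, not from the dataset files):

```python
def split_by_series(trials):
    """Series 1-8 go to training, series 9-10 to testing, per the protocol above."""
    train = [t for t in trials if t["series"] <= 8]
    test = [t for t in trials if t["series"] >= 9]
    return train, test

# One subject, 10 series, 30 trials each (the approximate counts in the text).
trials = [{"subject": 1, "series": s, "trial": i}
          for s in range(1, 11) for i in range(30)]
train, test = split_by_series(trials)
print(len(train), len(test))   # 240 training trials, 60 test trials
```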

There are three main modules for recording the data on hand movement and grasping items. A physical object is connected with the Neuro-sky device for the signal recording phase, and the human arm is connected with the EMG module to capture all muscular activities of the arm. The last part is the EEG signal module, consisting of 32 EEG channels, each with a sampling frequency of 500 Hz.

3.2 Data Preparation

Recorded brain activities contain some noise that is unnecessary for the underlying classification process. Categorizing the current brain event/activity from a single chunk of signal is usually not possible, because the ongoing brain activity is not visible in a single trial. To achieve accurate classification, we need an average of multiple trials of brain activities recorded in a familiar context, maintaining the time series. To remove noise from the data, we use the event-related potential (ERP) signal processing technique to improve the signal-to-noise ratio, as depicted in Fig. 1.


Figure 1: Denoising with DWT

We investigated the effect of background noise on the actual signals. Likewise, other brain activity creates interference in the signals. As experiments prove, if the person moves their tongue during the signal recording phase, this can affect the event classification process [28]. This preprocessing step aims to clean the artifacts from the data before applying classification approaches.
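The core of ERP averaging, multiple time-locked trials averaged to suppress uncorrelated noise, can be demonstrated numerically. The Gaussian bump standing in for the event-related component and the noise level are synthetic assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 500
t = np.arange(fs) / fs
erp = np.exp(-((t - 0.3) ** 2) / 0.002)               # event-related component
trials = erp + rng.normal(scale=1.0, size=(100, fs))  # 100 noisy repetitions

# Averaging 100 time-locked trials shrinks the noise by roughly sqrt(100).
single_trial_err = np.abs(trials[0] - erp).mean()
averaged_err = np.abs(trials.mean(axis=0) - erp).mean()
print(averaged_err < single_trial_err)
```

This is why a single chunk rarely suffices for classification, while the trial average exposes the underlying activity.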

3.3 Think-Net Convolutional Neural Network

Finding patterns in the data is impossible without a hypothesis or an automated technique. Data with sparse dominant features, with no correlation between dimensions, and with very few factors of variation are impossible to classify by type manually. A multilayer convolutional neural network is used for the classification of the brain signals. The signals consist of multidimensional continuous values with their corresponding classes. Machine learning is the best and most widely implemented approach to classify processed and raw signals.

This part of the research work describes the Think-Net Convolutional Neural Network (TN-CNN). It recognizes brain signals and classifies them into six primary mental states. The second part of the research relates to implementing the server for intelligent BCI. In TN-CNN, the first layer is the signal input layer. This layer takes an input shape of (20, 32, 1) with 64 filters, a kernel size of 7, and the rectified linear unit (ReLU) activation function. It specifies the signal size through the input size argument: the signal size corresponds to the signal instances and the 32 electrodes taking part in the data recording process.

The convolutional layer (conv layer) is used for signal feature extraction. This layer consists of different filters to extract signal patterns. The batch normalization layer normalizes its inputs to zero mean and unit variance, which strengthens the CNN. After the batch normalization layer there is a non-linear layer, which handles the non-linear values of the conv and input tensors. After that, a ReLU activation function and a max-pooling layer are used; a pooling layer is frequently placed after every convolutional layer in a CNN.

Various pooling layers perform different functions, such as max-pooling, min-pooling, and average pooling. The soft-max layer prepares the data for the classification layer of the CNN, using data from the pooling and convolutional layers. The fully connected (FC) layer does the actual classification: it takes input from all the layers and passes it through the network. The classification layer computes the cross-entropy loss for the multi-class problem and matches each sample with its relevant class. The structure of TN-CNN is shown in Fig. 2.
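The layer sequence described above (convolution, ReLU, max-pooling) can be illustrated with a pure-numpy forward pass over a single 20 × 32 input of the shape named earlier. This is an illustrative sketch of the operations, not the TN-CNN implementation itself:

```python
import numpy as np

def conv2d(x, kernel):
    """'Valid' 2-D convolution (cross-correlation, as in most CNN libraries)."""
    kh, kw = kernel.shape
    h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Rectified linear unit: negative activations are zeroed."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max-pooling, discarding any ragged edge."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[: h * size, : w * size].reshape(h, size, w, size).max(axis=(1, 3))

# One 20 x 32 "signal image" through a single 7 x 7 filter, ReLU, 2 x 2 pooling.
rng = np.random.default_rng(2)
x = rng.normal(size=(20, 32))
kernel = rng.normal(size=(7, 7))
features = max_pool(relu(conv2d(x, kernel)))
print(features.shape)   # (20-7+1)//2 by (32-7+1)//2 feature map
```

In the real network, 64 such filters run in parallel and the resulting feature maps feed the soft-max and fully connected layers.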


Figure 2: Structure of TN-CNN

The model implemented in this paper consists of 10 layers in total. The optimizer and loss function are Adam and binary cross-entropy, respectively. The Cross-Entropy (CE) cost function is calculated over the values in the input tensor and the total number of output values or data points.

CE = -(1/X) Σ_{k=0}^{X} [z ln(b) + (1 - z) ln(1 - b)]  (1)

In Eq. (1), k and z are the values we need to map between input and output, X represents the number of classes, and b is the SoftMax probability of each class. With this technique, the issue of adjusting the learning-rate value is resolved, so there is no need to adjust the learning rate manually. In the following equations, we explain the whole process related to the optimization technique and how the learning rate is updated automatically during training sessions.
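A numeric sketch of the cross-entropy cost in Eq. (1) (the clipping constant `eps` is a standard numerical-stability assumption, not part of the paper):

```python
import numpy as np

def binary_cross_entropy(z, b, eps=1e-12):
    """CE = -(1/X) * sum(z*ln(b) + (1-z)*ln(1-b)), with b clipped for stability."""
    z = np.asarray(z, dtype=float)
    b = np.clip(np.asarray(b, dtype=float), eps, 1.0 - eps)
    return -np.mean(z * np.log(b) + (1.0 - z) * np.log(1.0 - b))

targets = np.array([1.0, 0.0, 1.0, 0.0])
good = np.array([0.9, 0.1, 0.8, 0.2])    # confident, mostly correct predictions
bad = np.array([0.4, 0.6, 0.5, 0.5])     # uncertain predictions
print(binary_cross_entropy(targets, good) < binary_cross_entropy(targets, bad))
```

Lower loss for the confident, correct predictions is exactly the gradient signal the optimizer below exploits.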

For the accumulated gradient [29], E[g²]_t is maintained with respect to the time stamp, where t is a time instant (not a time interval) and ρ is the decay constant, as shown in Eq. (2).

E[g²]_t = ρ E[g²]_{t-1} + (1 - ρ) g_t²  (2)

The accumulated value is updated and used to compute the individual update Δx. The change factor for adjusting the learning rate is shown in Eqs. (3) and (4). This automatic mechanism adjusts the learning rate without any manual requirements or manual computation.

Δx_t = -(√(E[Δx²]_{t-1} + ε) / √(E[g²]_t + ε)) g_t  (3)

E[Δx²]_t = ρ E[Δx²]_{t-1} + (1 - ρ) Δx_t²  (4)

The computed change factor is then applied to the parameter, as shown in Eq. (5).

x_{t+1} = x_t + Δx_t  (5)
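The update rule of Eqs. (2) to (5) can be sketched on a toy quadratic objective. This is an illustrative Adadelta-style loop under assumed hyperparameters (ρ = 0.95, ε = 1e-6), not the paper's training code:

```python
import numpy as np

def adadelta_step(x, grad, acc_g, acc_dx, rho=0.95, eps=1e-6):
    """One update following Eqs. (2)-(5): no hand-tuned learning rate needed."""
    acc_g = rho * acc_g + (1 - rho) * grad ** 2                 # Eq. (2)
    dx = -np.sqrt(acc_dx + eps) / np.sqrt(acc_g + eps) * grad   # Eq. (3)
    acc_dx = rho * acc_dx + (1 - rho) * dx ** 2                 # Eq. (4)
    return x + dx, acc_g, acc_dx                                # Eq. (5)

# Minimize f(x) = x^2 (gradient 2x) starting from x = 3: the step sizes
# adapt from the accumulated statistics, with no manual learning rate.
x, acc_g, acc_dx = 3.0, 0.0, 0.0
for _ in range(2000):
    x, acc_g, acc_dx = adadelta_step(x, 2 * x, acc_g, acc_dx)
print(abs(x) < 3.0)   # the iterate has moved toward the minimum at 0
```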

3.4 Server Implementation for Intelligent BCI

Our proposed model gathers bio-signals from the human brain resulting from specific cognitive, sensory, or motor events. The brain signals are the electrophysiological response to the stimulus. Our proposed system records current human brain activity, such as the electric charge in human cells and tissues. The recorded data are then passed to the deep learning model as input for understanding and classification. The proposed architecture is described in a real-time context with synchronous communication between system modules. The working methodology of the entire system, with a single-module evaluation, is shown in Fig. 3.


Figure 3: Real-time functional architecture and methodology for EEG signal processing and data flow

3.4.1 Signal Classification

Classification refers to relating an entity or object to a class according to a given feature set. The feature set describes the required attributes of the object that can define its type; attributes can be referred to as properties of the object. This prediction process is not the traditional approach of directly matching one object with another. TN-CNN helps us classify objects by feeding some input to the model. The model needs thorough training before the classification step, and classification accuracy depends on the quality of that training.

To classify the brain signals, we need a pre-trained machine learning model that delivers a real-time classification result. Before classification, the model goes through the training phase. During this phase, the current human state of mind is recorded with the brain signals as a time series. After removing artifacts and noise, the data, together with the corresponding state of mind, are applied as input to the model for training. After multiple training trials, the model gains the ability to classify a given feature set into the corresponding class. Bear in mind that the tensor shape at training time must match the tensor shape at testing or classification time; the recorded signal must therefore be pre-processed identically at both training and testing time.

This classification result can interpret the human intentional, situational, and emotional state. This approach overcomes many problems in the classification of brain signals, because brain signals are recorded with multiple variations that are not computable with generic approaches such as ERP.

3.4.2 Central Integrated Server

Servers are used to keep files and distribute data from a single source to multiple end nodes. For future compatibility, the system needs to be both wired and wireless and accessible anywhere, whether within or out of context. For real-time integration of modules for end-to-end communication, the centrally integrated server helps distribute the data. To distribute data to the different modules of the proposed architecture, we have implemented a client-server-based technique for sharing data with end nodes.

The proposed technique provides a better BCI architecture for real-time BCI integration with hardware and software nodes. Current BCI architectures work in a specific context and do not provide compatibility for real-time collaboration of BCI within and out of context. The signal decoding process uses standard string encoding and web techniques in the module integration for data delivery between modules. The purpose of the architecture is not just to send and receive data between modules but to keep track of the data and provide guaranteed data flow. This central integration can be implemented on a local network and over the Internet with basic web techniques such as port forwarding.
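The fan-out role of the central integrated server can be sketched in-process. This is a minimal illustrative hub, not the paper's server: thread-safe queues stand in for the network links, and the class and node names are hypothetical.

```python
import queue
import threading

class CentralHub:
    """Central-server sketch: fan data out to every registered end node."""

    def __init__(self):
        self._lock = threading.Lock()
        self._nodes = {}

    def register(self, node_id):
        """Attach an end node and return the queue it will receive data on."""
        with self._lock:
            q = queue.Queue()
            self._nodes[node_id] = q
            return q

    def publish(self, payload):
        """Deliver one payload to every registered node (guaranteed fan-out)."""
        with self._lock:
            for q in self._nodes.values():
                q.put(payload)

hub = CentralHub()
wheelchair = hub.register("wheelchair")
monitor = hub.register("monitor")
hub.publish({"state": "Grasping"})        # e.g. a classified mental state
print(wheelchair.get()["state"], monitor.get()["state"])
```

A real deployment would replace the queues with sockets (locally or over the Internet via port forwarding), but the distribution logic is the same.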

3.4.3 Central Control

As with other client-server architectures, a central control hub is necessary to supervise ongoing activities in the system. Log details and results are compiled in the central control, and data are distributed to the associated modules. Errors or mishaps in the system are corrected in the central control for better collaboration of all modules. As every system has limitations, this architecture depends entirely on the central control: if the central control fails, all other modules stop working. To overcome this dependency, we are continuing our research to make the nodes independent of a single source, which minimizes the possibility of a sudden system shutdown.

3.4.4 Synchronous Mode End Devices

Synchronous communication is communication between one or more connected nodes sharing data in a real-time context, like a live video chat. Our proposed system communicates in real time, so the respective devices must collaborate in synchronous mode. Each connected device shares its log tree with the central server to update and show the current system state. Every connected device can request the system's status by requesting the log tree from the server; this status helps all devices maintain their state. Every node maintains its local log details for transfer to the server, and the overall system logs are maintained at the server end. The hardware module can be configured in a real-time environment, but the system architecture's synchronous-mode operation according to the user's brain input has not yet been tested and verified. Based on the model training, specific commands can be integrated with the hardware module, which runs and actuates the process according to the incoming commands.

4  Results and Discussion

For training and testing, real-time signals are grabbed to construct the dataset. In total, data from 12 subjects were used for training and 2 subjects for testing, with 30 trials for each subject. The TN-CNN is fit to the training signals in 50 epochs, each iterating over all training signals. Training loss was around 0.64 in the first epoch, decreased continuously with every epoch, and ended at around 0.082. Training accuracy started at 0.071 in the first epoch and ended at 0.97. The training accuracy, loss, and MSE graphs are shown in Figs. 4-6, respectively.


Figure 4: Training accuracy


Figure 5: Training loss


Figure 6: Training MSE

During the testing phase, 5000 signal tensors are mapped to 6 classes for better understanding of the results. The 6 classes are Hand Start, Grasping, Lift, Hold, Replace, and Release. The model predicts the actual signal type with the corresponding class. The high precision, recall, f1-score, and support obtained at testing time are shown in Table 1.


Testing was applied to a total of 5000 signal tensors and achieved a highest accuracy of 0.93. This is due to proper pre-processing and precise signal normalization techniques. The selection of model layers and the filters at each layer was made after careful testing on the input signals. The testing accuracy shows how accurately the model classifies incoming signals to their corresponding labels.

4.1 Confusion Matrix

Misclassification by a trained machine learning model occurs when a signal of type A is wrongly classified as type B. To show the total misclassifications and correct classifications, we present the confusion matrix. The numbers in the boxes of the confusion matrix show the total number of correctly classified and misclassified signals. The graphical representation is shown in Fig. 7.
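The structure of such a matrix is easy to reproduce: entry (i, j) counts signals of true class i predicted as class j, so the diagonal holds correct classifications and everything off-diagonal is a misclassification. The tiny label lists below are illustrative only, not the paper's results:

```python
import numpy as np

CLASSES = ["Hand Start", "Grasping", "Lift", "Hold", "Replace", "Release"]

def confusion_matrix(true_labels, predicted, n_classes=6):
    """cm[i, j] counts signals of true class i classified as class j."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(true_labels, predicted):
        cm[t, p] += 1
    return cm

true_labels = [0, 0, 1, 2, 2, 5]
predicted = [0, 1, 1, 2, 2, 5]      # one "Hand Start" misread as "Grasping"
cm = confusion_matrix(true_labels, predicted)
print(cm.trace(), cm.sum())         # correct classifications vs. all signals
```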


Figure 7: Graphical representation of confusion matrix

4.2 Pre-Trained ML-Model

Data collected from the human scalp can be directly accessed using the Neuro-Sky kit. To maintain a persistent connection, data integrity, and the server properties, a data scraper is implemented with a multi-threading technique to enhance the efficiency of the central thread. The required data are fetched and stored by adding a callback, as this is the primary purpose of the threads. The Python server code mentioned earlier is implemented and tested with the end devices and the data grabber entity on the local server. The hardware module used in our system architecture works in synchronous mode to act according to the user's input. Based on the model training, specific commands can be integrated with the hardware module, which runs and actuates the process according to the incoming commands.

4.3 Performance Evaluation Matrix

For validation, an evaluation of the implemented model's testing process is executed. The primary performance evaluation metrics are the time and the integrity of the predicted data. We show the model training and testing evaluation based on accuracy, MSE, and loss for training and testing, respectively. The performance of the prior system model is compared to our newly proposed model. The criteria listed below assess the accuracy and other performance characteristics, as given in Eqs. (6) to (8).

Precision = True Positive / (True Positive + False Positive) × 100%  (6)

Recall = True Positive / (True Positive + False Negative) × 100%  (7)

Accuracy = (True Positive + True Negative) / (True Positive + True Negative + False Positive + False Negative) × 100%  (8)
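Eqs. (6) to (8) translate directly into code. The counts below are illustrative only, not the paper's confusion-matrix values:

```python
def metrics(tp, tn, fp, fn):
    """Precision, recall, and accuracy as percentages, per Eqs. (6)-(8)."""
    precision = tp / (tp + fp) * 100
    recall = tp / (tp + fn) * 100
    accuracy = (tp + tn) / (tp + tn + fp + fn) * 100
    return precision, recall, accuracy

# Hypothetical counts for a single class evaluated one-vs-rest.
p, r, a = metrics(tp=90, tn=850, fp=10, fn=50)
print(round(p, 1), round(r, 1), round(a, 1))
```

Note how accuracy can stay high even when recall is poor, which is why all three metrics are reported in Table 1.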

This high precision was accomplished using a high-quality system with graphics processing unit (GPU) support and a high-fidelity signal depth map. High-depth signals are used to discover the object skeletons and build the depth map for the respective signals. TN-CNN uses a redesigned layer and activation-function sequence while keeping the previous layer's output in mind for backpropagation.

The principal purpose of implementing this system is to classify human thoughts in real time. Our proposed and built structure uses a GPU-based system with a high depth of output signals and a GPU-based central unit for acquiring and transmitting control signals to the base system units. The information is transferred from the base unit to the central unit. Enhanced machine learning models and system modules with sufficient support to function in the specified manner were used to create this whole system architecture, with outcomes showing a meager error rate. A fair comparison of the proposed and current work is shown in Table 2. The accuracies of existing models, achieved using a simple convolutional neural network with Butterworth and low-pass filtering, were 92% and 82.1%, respectively [30,31]. Our proposed and implemented model achieved an accuracy of 93%. Our primary focus is to provide an efficient and collaborative environment for BCI. Using all best practices in the implementation of persistent servers, we achieved all the required performance parameters.


5  Conclusion

BCI is an emerging technology that is very helpful for people with major and minor disabilities, and it contributes substantially to the biosciences. We find patterns in brain signal data using deep learning techniques that are more integrated, efficient, and time-saving than generic machine learning techniques such as regression-based models. Our proposed model, with its data preprocessing steps and refined structure, helps find the desired patterns in the data. The communication architecture with a central integrated server ensures system and communication integrity, maintaining the system's active service state. The overall responsibility of our proposed system is to provide a standard architecture for intelligent BCI. With the system's successful implementation, the best result is 93% accurate classification of brain signals. Beyond model training and testing accuracy, the central framework's integrity and efficiency parameters are achieved as proposed. This implementation provides an effective platform for new BCI technologies to collaborate with the implemented architecture. In future work, we will develop more efficient data preprocessing techniques, improve server response time, and, most importantly, integrate real-time physical devices to provide a more efficient system architecture.

Limitation and Future Research

According to an estimate by the World Health Organization (WHO), over 1 billion people live with a disability. The proposed architecture will be very helpful for creating assistive devices for disabled people, just as the Robot Operating System and other non-BCI solutions are already serving this purpose. The proposed system will help create and support future systems and tools for disabled people with a failure-safety mechanism. The classification accuracy depends entirely on the machine learning model, so error prevention is not the responsibility of the system architecture. Future research may focus on an error-prevention mechanism for the machine learning model to achieve more controlled behavior.

Funding Statement: Authors would like to acknowledge the support of the Deputy for Research and Innovation-Ministry of Education, Kingdom of Saudi Arabia for funding this research through a project (NU/IFC/ENT/01/014) under the institutional funding committee at Najran University, Kingdom of Saudi Arabia.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.

References

    1. G. N. Ranky and S. Adamovich, “Analysis of a commercial EEG device for the control of a robot arm,” in Proc. 2010 IEEE 36th Annual Northeast Bioengineering Conf. (NEBEC), New York, NY, USA, pp. 1–2, 2010. [Google Scholar]

    2. R. A. Ramadan and A. V. Vasilakos, “Brain computer interface: Control signals review,” Neurocomputing, vol. 223, no. 5, pp. 26–44, 2017. [Google Scholar]

    3. A. Korik, R. Sosnik, N. Siddique and D. Coyle, “3D hand motion trajectory prediction from EEG mu and beta bandpower,” Progress in Brain Research, vol. 228, no. 5, pp. 71–105, 2016. [Google Scholar]

    4. M. Z. Ilyas, P. Saad and M. I. Ahmad, “A survey of analysis and classification of EEG signals for brain-computer interfaces,” in Proc. 2nd Int. Conf. on Biomedical Engineering (ICoBE), Penang, Malaysia, pp. 1–6, 2015. [Google Scholar]

    5. D. Tan and A. Nijholt, “Brain-computer interfaces and human-computer interaction,” in Brain-Computer Interfaces, London: Springer, 2010. https://doi.org/10.1007/978-1-84996-272-8. [Google Scholar]

    6. R. Chatterjee, T. Maitra, S. K. Hafizul Islam, M. M. Hassan, A. Alamri et al., “A novel machine learning based feature selection for motor imagery EEG signal classification in internet of medical things environment,” Future Generations Computer Systems, vol. 98, no. 9, pp. 419–434, 2019. [Google Scholar]

    7. C. L. Pulliam, S. R. Stanslaski and T. J. Denison, “Industrial perspectives on brain-computer interface technology,” Handbook of Clinical Neurology, vol. 168, no. 1, pp. 341–352, 2020. [Google Scholar]

    8. Q. Bai and K. D. Wise, “Single-unit neural recording with active microelectrode arrays,” IEEE Transactions on Bio-Medical Engineering, vol. 48, no. 8, pp. 911–920, 2001. [Google Scholar]

    9. B. S. Oken, “Filtering and aliasing of muscle activity in EEG frequency analysis,” Electroencephalography and Clinical Neurophysiology, vol. 64, no. 1, pp. 77–80, 1986. [Google Scholar]

  10. J. -H. Cho, J. -H. Jeong, K. -H. Shim, D. -J. Kim and S. -W. Lee, “Classification of hand motions within EEG signals for non-invasive BCI based robot hand control,” Miyazaki, Japan, pp. 515–518, 2018. [Google Scholar]

  11. A. T. Azar, V. E. Balas and T. Olariu, “Classification of EEG-based brain–computer interfaces,” Advanced Intelligent Computational Technologies and Decision Support Systems, vol. 486, no. 1, pp. 97–106, 2014. [Google Scholar]

  12. F. Li, Y. Fan, X. Zhang, C. Wang, F. Hu et al., “Multi-feature fusion method based on EEG signal and its application in stroke classification,” Journal of Medical Systems, vol. 44, no. 2, pp. 1–11, 2019. [Google Scholar]

  13. D. D. Chakladar and S. Chakraborty, “Multi-target way of cursor movement in brain computer interface using unsupervised learning,” Biologically Inspired Cognitive Architectures, vol. 25, no. 3, pp. 88–100, 2018. [Google Scholar]

  14. F. Garro and Z. McKinney, “Toward a standard user-centered design framework for medical applications of brain-computer interfaces,” in Proc. 2020 IEEE Int. Conf. on Human-Machine Systems (ICHMS), Rome, Italy, pp. 1–3, 2020. [Google Scholar]

  15. Y. Yu, Z. Zhou, E. Yin, J. Jiang, J. Tang et al., “Toward brain-actuated car applications: Self-paced control with a motor imagery-based brain-computer interface,” Computers in Biology and Medicine, vol. 77, no. 10, pp. 148–155, 2016. [Google Scholar]

  16. L. R. Krol and T. O. Zander, “Passive BCI-based neuroadaptive systems,” in Proc. 7th Graz Brain-Computer Interface Conf., Graz, Austria, pp. 1–7, 2017. [Google Scholar]

  17. Z. Lv, L. Qiao, Q. Wang and F. Piccialli, “Advanced machine-learning methods for brain-computer interfacing,” IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. 18, no. 5, pp. 1688–1698, 2021. [Google Scholar]

  18. N. M. Isa, “Motor imagery classification in brain computer interface (BCI) based on EEG signal by using machine learning technique,” Bulletin of Electrical Engineering and Informatics, vol. 8, no. 1, pp. 269–275, 2019. [Google Scholar]

  19. A. Vourvopoulos, E. Niforatos and M. Giannakos, “EEGlass: An EEG-eyeware prototype for ubiquitous brain-computer interaction,” in Proc. 2019 ACM Int. Joint Conf. on Pervasive and Ubiquitous Computing and Proc. of the 2019 ACM Int. Symp. on Wearable Computers, New York, NY, United States, pp. 647–652, 2019. [Google Scholar]

  20. P. Jahankhani, V. Kodogiannis and K. Revett, “EEG signal classification using wavelet feature extraction and neural networks,” in Proc. IEEE John Vincent Atanasoff 2006 Int. Symp. on Modern Computing (JVA’06), Sofia, Bulgaria, pp. 120–124, 2006. [Google Scholar]

  21. I. A. Fouad, F. E. -Z. M. Labib, M. S. Mabrouk, A. A. Sharawy and A. Y. Sayed, “Improving the performance of p300 BCI system using different methods,” Network Modeling and Analysis in Health Informatics and Bioinformatics, vol. 9, no. 1, pp. 1–13, 2020. [Google Scholar]

  22. G. -J. Kim and J. -S. J. J. O. D. C. Han, “Unsupervised machine learning based on neighborhood interaction function for BCI (Brain-Computer Interface),” Journal of Digital Convergence, vol. 13, no. 8, pp. 289–294, 2015. [Google Scholar]

  23. J. E. van Engelen and H. H. Hoos, “A survey on semi-supervised learning,” Machine Learning, vol. 109, no. 2, pp. 373–440, 2020. [Google Scholar]

  24. X. Zhu and A. B. Goldberg, “Introduction to semi-supervised learning,” Synthesis Lectures on Artificial Intelligence and Machine Learning, vol. 3, no. 1, pp. 1–130, 2009. [Google Scholar]

  25. D. Li, H. Zhang, M. S. Khan and F. Mi, “A self-adaptive frequency selection common spatial pattern and least squares twin support vector machine for motor imagery electroencephalography recognition,” Biomedical Signal Processing and Control, vol. 41, no. 1, pp. 222–232, 2018. [Google Scholar]

  26. A. Subasi and M. Ismail Gursoy, “EEG signal classification using PCA, ICA, LDA and support vector machines,” Expert Systems with Applications, vol. 37, no. 12, pp. 8659–8666, 2010. [Google Scholar]

  27. A. H. Alsharif, N. Z. Md Salleh and R. Baharun, “Neuromarketing: The popularity of the brain-imaging and physiological tools,” Neuroscience Research Notes, vol. 3, no. 5, pp. 13–22, 2021. [Google Scholar]

  28. A. Bashashati, M. Fatourechi, R. K. Ward and G. E. Birch, “A survey of signal processing algorithms in brain-computer interfaces based on electrical brain signals,” Journal of Neural Engineering, vol. 4, no. 2, pp. 32–57, 2007. [Google Scholar]

  29. J. Hermans, G. Spanakis and R. Möckel, “Accumulated gradient normalization,” in Proc. Asian Conf. on Machine Learning, Seoul, Korea, PMLR, pp. 439–454, 2017. [Google Scholar]

  30. S. Park, H. -S. Cha, J. Kwon, H. Kim and C. -H. Im, “Development of an online home appliance control system using augmented reality and an SSVEP-based brain-computer interface,” in Proc. 8th Int. Winter Conf. on Brain-Computer Interface (BCI), Gangwon, Korea (Southpp. 1–2, 2020. [Google Scholar]

  31. G. Huve, K. Takahashi and M. Hashimoto, “Brain-computer interface using deep neural network and its application to mobile robot control,” in Proc. IEEE 15th Int. Workshop on Advanced Motion Control (AMC), Tokyo, Japan, pp. 169–174, 2018. [Google Scholar]



Copyright © 2023 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.