Open Access
ARTICLE
Interpretable and Adaptable Early Warning Learning Analytics Model
1 The University of Newcastle, Sydney, 2000, Australia
2 University of Management and Technology, Lahore, 54770, Pakistan
3 Forman Christian College, Lahore, 54600, Pakistan
* Corresponding Author: Atif Alvi. Email:
(This article belongs to the Special Issue: Machine Learning Empowered Secure Computing for Intelligent Systems)
Computers, Materials & Continua 2022, 71(2), 3211-3225. https://doi.org/10.32604/cmc.2022.023560
Received 12 September 2021; Accepted 13 October 2021; Issue published 07 December 2021
Abstract
Major issues currently restricting the use of learning analytics are the lack of interpretability and adaptability of the machine learning models used in this domain. Interpretability makes it easy for stakeholders to understand how these models work, and adaptability makes it easy to reuse the same model across multiple cohorts and courses in educational institutions. Recently, some learning analytics models have been built with interpretability in mind, but their interpretability has not been quantified; adaptability, moreover, has not been specifically addressed in this domain. This paper presents a new framework based on hybrid statistical fuzzy theory to overcome these limitations. It also provides explainability in the form of rules describing the reasoning behind a particular output. The paper also reports the system's evaluation on a benchmark dataset, showing promising results. The measure of explainability, the fuzzy index, shows that the model is highly interpretable. The system achieves more than 82% recall in both the classification and the context adaptation stages.
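To illustrate the kind of rule-based explanation the abstract refers to, the following is a minimal sketch of a fuzzy rule-based classifier over a single student activity feature. It is an assumption for illustration only, not the paper's actual framework: the feature name "engagement", the triangular membership functions, and the two-rule base are all hypothetical.

```python
# Minimal sketch (assumption, not the paper's method): a fuzzy rule-based
# classifier that returns a prediction together with a human-readable rule,
# i.e. the reasoning behind a particular output.

def tri(x, a, b, c):
    """Triangular membership function with support (a, c) and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical membership functions for a normalised "engagement" feature in [0, 1].
LOW  = lambda x: tri(x, -0.01, 0.0, 0.5)
HIGH = lambda x: tri(x, 0.5, 1.0, 1.01)

# Hypothetical rule base: (label, rule description, membership evaluator).
RULES = [
    ("at_risk",     "IF engagement is LOW THEN student is at risk",      LOW),
    ("not_at_risk", "IF engagement is HIGH THEN student is not at risk", HIGH),
]

def classify(engagement: float):
    """Return the winning label, the rule that fired, and its firing strength."""
    fired = [(rule_fn(engagement), label, desc) for label, desc, rule_fn in RULES]
    strength, label, desc = max(fired)  # strongest-firing rule wins
    return label, desc, strength

if __name__ == "__main__":
    label, rule, strength = classify(0.2)
    print(f"prediction={label}, firing strength={strength:.2f}")
    print(f"explanation: {rule}")
```

In this sketch the explanation is simply the text of the strongest-firing rule, which mirrors the abstract's claim that the model's outputs are accompanied by rules describing its reasoning.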
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.