Open Access
ARTICLE
Crowdsourcing-Based Framework for Teaching Quality Evaluation and Feedback Using Linguistic 2-Tuple
Department of Computer Science, Chengdu University of Information Technology, Chengdu, 610025, China.
CSIT Department, School of Science, RMIT University, Melbourne, 3058, Australia.
* Corresponding Author: Tao Wu. Email: .
Computers, Materials & Continua 2018, 57(1), 81-96. https://doi.org/10.32604/cmc.2018.03259
Abstract
Crowdsourcing is widely used in many fields to collect goods and services from large groups of participants. Evaluating teaching quality by collecting feedback from experts or students after class is not only delayed but also inaccurate. In this paper, we present a crowdsourcing-based framework for evaluating teaching quality in the classroom, in which a weighted average operator aggregates information from students' questionnaires expressed as linguistic 2-tuples. We then define a crowd grade, based on similarity degree, to distinguish the contributions of different students and to minimize the impact of abnormal students on the evaluation. The crowd grade is updated at the end of each feedback round, which keeps the evaluation accurate. Moreover, a simulated case illustrates how to apply this framework to assess teaching quality in the classroom. Finally, we developed a prototype and carried out experiments on a series of real questionnaires and two sets of modified data. The results show that teachers can locate the weak points of their teaching and, furthermore, identify abnormal students in order to improve teaching quality. Meanwhile, our approach tolerates abnormal students well, making the evaluation more accurate.
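To make the aggregation step concrete, the minimal Python sketch below shows how a weighted average operator can combine linguistic 2-tuple ratings, with per-student weights standing in for the crowd grades described in the abstract. The term set, ratings, and weights are illustrative assumptions, not values or code taken from the paper.

    # Minimal sketch of 2-tuple linguistic weighted averaging.
    # The term set, example ratings, and weights below are assumptions for
    # illustration only; they are not taken from the paper.

    TERMS = ["very poor", "poor", "medium", "good", "very good"]  # s_0 .. s_4

    def to_two_tuple(beta):
        """Map a numeric value beta in [0, g] to a 2-tuple (term index, alpha)."""
        i = int(beta + 0.5)          # nearest term index
        return i, beta - i           # alpha lies in [-0.5, 0.5)

    def from_two_tuple(i, alpha):
        """Map a 2-tuple back to its numeric value."""
        return i + alpha

    def weighted_average(two_tuples, weights):
        """Aggregate 2-tuple ratings using weights (e.g. crowd grades)."""
        total_w = sum(weights)
        beta = sum(w * from_two_tuple(i, a)
                   for (i, a), w in zip(two_tuples, weights)) / total_w
        return to_two_tuple(beta)

    if __name__ == "__main__":
        # Three students rate one teaching criterion; the outlier gets a low weight.
        ratings = [(3, 0.0), (4, 0.0), (1, 0.0)]   # good, very good, poor
        weights = [1.0, 1.0, 0.3]
        idx, alpha = weighted_average(ratings, weights)
        print(f"Aggregated rating: ({TERMS[idx]}, {alpha:+.3f})")

With these assumed values, the low weight on the outlying rating keeps the aggregate close to "good", which mirrors how a crowd grade limits the influence of an abnormal student on the overall evaluation.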
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.