Open Access
ARTICLE
Deep Transfer Learning Models for Mobile-Based Ocular Disorder Identification on Retinal Images
1 Department of Multimedia Engineering, Kaunas University of Technology, Kaunas, 44249, Lithuania
2 Department of Computer Science, Landmark University, Omu Aran, 251103, Nigeria
3 Department of Computer Science, Faculty of Information and Communication Sciences, University of Ilorin, Ilorin, 240003, Nigeria
4 Department of Telecommunication Science, University of Ilorin, Ilorin, 230003, Nigeria
5 Department of Library and Information Science, Fu Jen Catholic University, New Taipei City, 24205, Taiwan
6 Department of Computer Science and Information Engineering, Fintech and Blockchain Research Center, Asia University, Taichung City, 41354, Taiwan
7 Department of Electrical and Electronics Engineering, Faculty of Engineering, University of Lagos, Akoka, Lagos, 100213, Nigeria
8 Department of Electrical Engineering and Information Technology, Institute of Digital Communication, Ruhr University, Bochum, 44801, Germany
* Corresponding Author: Cheng-Chi Lee. Email:
Computers, Materials & Continua 2024, 80(1), 139-161. https://doi.org/10.32604/cmc.2024.052153
Received 25 March 2024; Accepted 24 June 2024; Issue published 18 July 2024
Abstract
Mobile technology is developing rapidly, and mobile phone technologies have been integrated into the healthcare industry to assist medical practitioners. Computer vision models typically address image detection and classification problems. MobileNetV2 is a computer vision model that performs well on mobile devices, but it relies on cloud services to process biometric image data and return predictions to users, which increases latency. Processing biometric image datasets directly on mobile devices would make prediction faster, but mobile phones are resource-constrained in terms of storage, power, and computational speed. Hence, a model that is small, efficient, and delivers good prediction quality for biometric image classification is required. This paper proposes a novel approach that combines a quantized pre-trained CNN (PCNN) MobileNetV2 architecture with a Support Vector Machine (SVM), compacting the model representation and reducing both computational cost and memory requirements. Our contributions include: evaluating three CNN models for ocular disease identification under both transfer learning and deep-feature-plus-SVM approaches; demonstrating the superiority of deep features from MobileNetV2 combined with SVM classification; comparing against traditional methods; exploring the classification of six ocular diseases plus a normal class using 20,111 images after data augmentation; and reducing the number of trainable parameters. The model is trained on retinal fundus image datasets covering six ocular disorders: age-related macular degeneration (AMD), one of the most common eye illnesses, along with Cataract, Diabetes, Glaucoma, Hypertension, and Myopia, plus one Normal class. The experimental outcomes show that the proposed MobileNetV2-SVM model size is compressed.
The testing accuracies for MobileNetV2-SVM, InceptionV3, and MobileNetV2 are 90.11%, 86.88%, and 89.76%, respectively, while their accuracies are observed to be 92.59%, 83.38%, and 90.16%, respectively. The proposed technique can be used to classify biometric medical image datasets on mobile devices.
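The deep-feature-plus-SVM stage described in the abstract can be sketched as follows. In the paper's pipeline, each fundus image is passed through a pre-trained MobileNetV2 with its classification head removed, yielding (after global average pooling) a 1280-dimensional feature vector that is then classified by an SVM. This minimal sketch substitutes synthetic features for the MobileNetV2 output so it runs without the image dataset; the class count of seven matches the abstract, but the kernel, regularization, and data are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Stand-in for MobileNetV2 deep features: the pooled output of
# MobileNetV2 is 1280-dimensional, so we draw synthetic vectors of
# that size. The real pipeline would extract these from fundus images.
rng = np.random.default_rng(0)
n_classes = 7  # AMD, Cataract, Diabetes, Glaucoma, Hypertension, Myopia, Normal
n_samples = 700
X = rng.normal(size=(n_samples, 1280)).astype(np.float32)
y = rng.integers(0, n_classes, size=n_samples)
X[np.arange(n_samples), y] += 5.0  # inject class-dependent signal for the demo

# Stratified split, then an SVM on the (synthetic) deep features.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)
clf = SVC(kernel="linear", C=1.0)  # kernel choice is illustrative
clf.fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(acc)
```

Replacing the final dense layers of a CNN with an SVM in this way means only the SVM is trained on the target task, which is one route to the reduced training cost the abstract claims; quantizing the frozen feature extractor would then shrink the on-device model further.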
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.