Jun Liu1, Geng Yuan2, Changdi Yang2, Houbing Song3, Liang Luo4,*
CMES-Computer Modeling in Engineering & Sciences, Vol.135, No.2, pp. 1571-1587, 2023, DOI:10.32604/cmes.2022.023195
27 October 2022
Abstract The interpretability of deep learning models has emerged as a compelling area of artificial intelligence research. Safety criteria for medical imaging are highly stringent, and models are required to provide explanations. However, existing convolutional neural network (CNN) solutions for left ventricular segmentation treat the model only in terms of its inputs and outputs, so the interpretability of CNNs has come into the spotlight. Because medical imaging data are limited, many popular transfer-learning approaches fine-tune medical imaging models from networks pre-trained on the massive public ImageNet dataset. Unfortunately, this generates…