
Advances in Regularization Techniques for Deep Learning

Submission Deadline: 01 August 2025

Guest Editors

Prof. Dr. Dae-Ki Kang

Email: dkkang@dongseo.ac.kr

Affiliation: Division of Computer Engineering, Dongseo University, Busan 47011, South Korea


Research Interests: Deep Learning Regularization, Multi-Agent Reinforcement Learning, Hyperparameter Optimization and Neural Architecture Search, Automated Machine Learning, Adversarial Machine Learning, Bankruptcy Prediction Models and Financial Ratio Analysis, Data Mining-Based Intrusion Detection



Prof. Dr. Sukho Lee

Email: petra@gdsu.dongseo.ac.kr

Affiliation: Division of Computer Engineering, Dongseo University, Busan 47011, South Korea


Research Interests: Image Deconvolution/Restoration, Color Image Compression, Computer Vision, Deep Learning



Summary

Regularization plays a critical role in deep learning by reducing the risk of overfitting in deep neural networks. This special issue aims to explore novel regularization techniques and their applications in enhancing the performance of deep learning models.


We invite contributions presenting original research and advances in regularization techniques, with particular emphasis on the following topics:

- Theoretical Foundations of Deep Learning Regularization: Exploration of the underlying principles that govern regularization methods and their impact on model training.

- Novel Techniques of Deep Learning Regularization: Presentation of innovative regularization methods, including but not limited to those that leverage linear constraints, dropout strategies, and other emerging techniques (two classic baselines, dropout and L2 weight decay, are sketched just after this list).

- Performance Evaluation, Comparative Analysis, and Ablation Studies of Deep Learning Regularization: Rigorous evaluations of various regularization approaches, including detailed comparisons and ablation studies that highlight the contributions of each technique.

- Novel Applications and Domains of Deep Learning Regularization: Investigation into how different regularization methods can be applied across diverse models such as convolutional neural networks, recurrent neural networks, and transformers.
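To make two of the baseline techniques named above concrete, the following is a minimal sketch of dropout combined with L2 weight decay, assuming PyTorch; the model, layer sizes, and hyperparameter values are illustrative choices, not requirements of this call.

```python
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    """Toy classifier regularized with dropout between its layers."""

    def __init__(self, in_dim=784, hidden=256, out_dim=10, p_drop=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Dropout(p=p_drop),  # randomly zeroes activations during training
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)

model = SmallNet()
# weight_decay adds an L2 penalty on the parameters to every gradient update
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, weight_decay=1e-4)
```

Calling model.eval() disables dropout at inference time, following the standard PyTorch convention.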

 

To further enrich the scope of this special issue, we encourage submissions on the following additional topics:

- Integration of Regularization with Other Techniques: Research that investigates the synergistic effects of combining regularization methods with optimization algorithms, data augmentation, and ensemble methods.

- Impact of Regularization on Interpretability and Explainability: Studies focusing on how different regularization techniques affect the interpretability of deep learning models, particularly in critical applications like healthcare.

- Regularization in Transfer Learning and Few-Shot Learning: Contributions that explore how regularization techniques can enhance performance in scenarios with limited data, such as transfer learning and few-shot learning frameworks (an illustrative sketch follows this list).

- Challenges and Future Directions in Regularization: Discussions on the current challenges faced in the field of regularization and potential future research directions that could address these issues.
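As one concrete instance of the transfer-learning topic above, the sketch below adds a penalty, in the style of L2-SP, that discourages fine-tuned parameters from drifting far from their pretrained values. It assumes PyTorch; the stand-in backbone, the sp_penalty helper, and the strength value are hypothetical choices for illustration.

```python
import torch
import torch.nn as nn

model = nn.Linear(128, 10)  # stand-in for a pretrained backbone
# snapshot the pretrained parameters before fine-tuning begins
pretrained = {n: p.detach().clone() for n, p in model.named_parameters()}

def sp_penalty(model, reference, strength=1e-3):
    """L2-SP-style penalty: squared distance of each parameter from its
    pretrained value, summed over all parameters."""
    return strength * sum(
        (p - reference[n]).pow(2).sum() for n, p in model.named_parameters()
    )

# in the fine-tuning loop, the penalty is added to the task loss
x, y = torch.randn(32, 128), torch.randint(0, 10, (32,))
loss = nn.functional.cross_entropy(model(x), y) + sp_penalty(model, pretrained)
loss.backward()
```

Compared with plain L2 weight decay, which shrinks parameters toward zero, this penalty shrinks them toward the pretrained solution, which is often a better prior when target data are scarce.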

This special issue aims to foster a deeper understanding of innovative regularization methods in deep learning. We look forward to your valuable contributions that will advance the field and inspire future research.


Keywords

Regularization, Dropout, L1 Regularization, L2 Regularization
