D. Vidyabharathi1,*, V. Mohanraj2
Intelligent Automation & Soft Computing, Vol.36, No.3, pp. 2559-2573, 2023, DOI:10.32604/iasc.2023.032255
- 15 March 2023
Abstract For training present-day Neural Network (NN) models, the standard practice is to use decaying Learning Rates (LR). Most of these techniques start with a large LR and decay it several times over the course of training. Decaying has been shown to improve both generalization and optimization. Other hyperparameters, such as the network’s size, the number of hidden layers, dropout to avoid overfitting, batch size, and so on, are chosen purely by heuristics. This
work proposes an Adaptive Teaching Learning Based (ATLB) heuristic to identify
the optimal hyperparameters for diverse networks. Here we consider three …
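The decaying-LR baseline the abstract refers to can be illustrated with a minimal step-decay schedule. This is a generic sketch of the standard technique, not the ATLB heuristic itself; the function name and the specific decay values are illustrative assumptions.

```python
def step_decay_lr(initial_lr, decay_factor, decay_every, epoch):
    """Step-decay schedule (illustrative): start with a large LR and
    multiply it by decay_factor every `decay_every` epochs."""
    return initial_lr * (decay_factor ** (epoch // decay_every))

# Example: LR starts at 0.1 and is divided by 10 every 30 epochs,
# mirroring the "start large, decay multiple times" pattern.
schedule = [step_decay_lr(0.1, 0.1, 30, e) for e in (0, 29, 30, 60)]
```

Schedules like this fix the decay factor and interval in advance; the point of the paper is to search such hyperparameters adaptively rather than set them by hand.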