Submission Deadline: 28 February 2025
Guest Editors:
Prof. Lianbo Ma, Northeastern University, China
Prof. Yan Pei, The University of Aizu, Japan
Prof. Shi Cheng, Shaanxi Normal University, China
Prof. Chaomin Luo, Mississippi State University, USA
Deep neural networks have demonstrated substantial promise in a wide range of real-world applications, primarily owing to intricate architectures designed by domain experts. However, this design process is labor-intensive and demands considerable expertise. Such constraints have limited the further advancement of deep neural networks and fueled the emergence of Neural Architecture Search (NAS). Architectures discovered by NAS have recently outperformed manually designed counterparts on many tasks, and NAS has therefore gained traction in the deep learning field.
Specifically, NAS begins by defining a search space that encompasses all candidate architectures, and then employs a well-crafted search strategy to identify the optimal architecture within that space. Throughout the search, NAS must estimate the performance of each explored architecture to guide the search strategy effectively. The NAS problem is inherently hard: it involves complex constraints, discrete representations, a bi-level structure, computationally expensive performance evaluations, and multiple conflicting objectives.
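To make this loop concrete, the sketch below shows a minimal NAS cycle in Python. It is purely illustrative: the toy search space, the random-sampling search strategy, and the surrogate evaluation are assumptions for exposition, not methods endorsed by this call.

```python
# Minimal illustrative NAS loop (all names hypothetical): define a search
# space, sample candidate architectures, evaluate each, keep the best.
import random

SEARCH_SPACE = {
    "depth": [4, 8, 12],            # number of layers
    "width": [64, 128, 256],        # channels per layer
    "op":    ["conv3x3", "conv5x5", "skip"],
}

def sample_architecture():
    # Search strategy: here, plain random sampling from the space.
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def evaluate(arch):
    # Placeholder for the expensive step: in practice, train the candidate
    # and return its validation accuracy. A toy surrogate is used here.
    return random.random()

best_arch, best_score = None, float("-inf")
for _ in range(20):                 # search budget
    arch = sample_architecture()
    score = evaluate(arch)          # performance estimation guides the search
    if score > best_score:
        best_arch, best_score = arch, score

print(best_arch, best_score)
```

In real NAS systems, evaluate() dominates the cost, which is precisely what the efficiency techniques discussed next aim to reduce.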
Recently, various NAS methods have been introduced to mitigate these challenges. On the optimization side, multi-/many-objective, multimodal, and multi-task optimization approaches have been proposed to solve NAS problems. To improve search efficiency, researchers have designed weight inheritance, performance prediction, and zero-shot (training-free) approaches. In addition, NAS-based approaches have emerged in many practical applications (e.g., point cloud recognition and industrial defect detection). Despite the demonstrated efficacy of existing NAS methods, unresolved challenges and unexplored directions remain, including uniform architecture representations, cross-domain performance prediction, and reliable benchmarks.
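As one example of the efficiency techniques above, a zero-shot (training-free) approach scores candidate architectures without training them. The sketch below uses a simple gradient-norm proxy in PyTorch; the proxy choice and the toy candidate models are illustrative assumptions, not a specific method solicited here.

```python
# Hedged sketch of a zero-shot proxy: rank candidates by the gradient norm
# after a single forward/backward pass, with no training at all.
import torch
import torch.nn as nn

def grad_norm_score(model, batch):
    # Score = L2 norm of the gradients produced by one untrained pass.
    model.zero_grad()
    out = model(batch)
    out.sum().backward()
    return sum(p.grad.norm().item() for p in model.parameters()
               if p.grad is not None)

# Two toy candidate architectures from a hypothetical search space.
candidates = [
    nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1)),
    nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1)),
]
batch = torch.randn(8, 16)
scores = [grad_norm_score(m, batch) for m in candidates]
print(scores)  # higher proxy score -> candidate ranked higher (assumed)
```

The appeal of such indicators is that each candidate costs one forward/backward pass instead of a full training run; whether a given proxy correlates with final accuracy is itself an open research question within this call's scope.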
Main Topics:
New multi-objective optimization methods for neural architecture search
Efficient crossover and mutation operators for population generation
Representation strategies for deep network architectures
Weight inheritance with high generalization for neural architecture search
Supernets with low memory overhead for weight inheritance
Data-efficient performance predictors for neural architecture search
Cross-domain performance predictors for neural architecture search
Pareto-wise performance predictors for neural architecture search
Parameter-agnostic zero-shot approaches
New zero-shot indicators for neural architecture search
Large-scale search space benchmarks
Large-scale optimization algorithms for neural architecture search
Real-world applications of efficient neural architecture search, e.g., image sequence analysis, image analysis, face recognition, natural language processing, named entity recognition, text mining, network security, engineering problems, and financial and business data analysis