Control the number of skip‐connects to improve robustness of the NAS algorithm

Bibliographic Details
Published in: IET Computer Vision, Vol. 15, No. 5, pp. 356-365
Main Authors: Zhang, Bao Feng; Zhou, Guo Qiang
Format: Journal Article
Language: English
Published: Stevenage: John Wiley & Sons, Inc (Wiley), 01.08.2021

Summary: Recently, gradient-based neural architecture search (NAS) has made remarkable progress, characterised by high efficiency and fast convergence. However, gradient-based NAS algorithms exhibit two common problems. First, as training time increases, the NAS algorithm tends to favour the skip-connect operation, leading to performance degradation and unstable results. Second, computing resources are not reasonably allocated to the most valuable candidate network models. These two issues make it difficult to search for the optimal sub-network and harm stability. To address them, the super-net is pre-trained so that each operation has an equal opportunity to develop its strength, which provides a fair competition condition for the convergence of the architecture parameters. In addition, a skip-controller is proposed to ensure that each sampled sub-network contains an appropriate number of skip-connects. Experiments were performed on three mainstream datasets, CIFAR-10, CIFAR-100 and ImageNet, on which the improved method achieves comparable results with higher accuracy and stronger robustness.
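To make the skip-controller idea concrete, the sketch below caps how many skip-connect operations survive when a sub-network is discretised from a DARTS-style super-net. This is a minimal illustration under assumed conventions (the operation list, the cap `max_skips`, and the helper `discretize_with_skip_cap` are hypothetical and not the authors' published code): edges over the cap fall back to their best non-skip operation.

```python
# Hypothetical skip-connect controller for a DARTS-style cell.
# Each edge holds softmax-normalised architecture weights (alphas) over a
# fixed operation list; we keep at most `max_skips` skip-connects per cell.

OPS = ["skip_connect", "sep_conv_3x3", "sep_conv_5x5", "max_pool_3x3", "none"]

def discretize_with_skip_cap(edge_weights, max_skips=2):
    """Pick one operation per edge, keeping at most `max_skips` skip-connects.

    edge_weights: list of dicts mapping op name -> architecture weight.
    Returns the chosen operation name for each edge.
    """
    # 1. Greedy choice: the highest-weighted operation on every edge.
    chosen = [max(w, key=w.get) for w in edge_weights]

    # 2. Edges that currently use skip-connect, weakest skip weight first.
    skip_edges = sorted(
        (i for i, op in enumerate(chosen) if op == "skip_connect"),
        key=lambda i: edge_weights[i]["skip_connect"],
    )

    # 3. Demote the weakest skip-connects until the cap is respected,
    #    replacing each with that edge's best non-skip, non-"none" operation.
    for i in skip_edges[: max(0, len(skip_edges) - max_skips)]:
        fallback = {op: w for op, w in edge_weights[i].items()
                    if op not in ("skip_connect", "none")}
        chosen[i] = max(fallback, key=fallback.get)
    return chosen


# Toy usage: three edges, all of which currently prefer skip-connect.
alphas = [
    {"skip_connect": 0.60, "sep_conv_3x3": 0.30, "sep_conv_5x5": 0.05, "max_pool_3x3": 0.03, "none": 0.02},
    {"skip_connect": 0.50, "sep_conv_3x3": 0.40, "sep_conv_5x5": 0.05, "max_pool_3x3": 0.03, "none": 0.02},
    {"skip_connect": 0.70, "sep_conv_3x3": 0.20, "sep_conv_5x5": 0.05, "max_pool_3x3": 0.03, "none": 0.02},
]
print(discretize_with_skip_cap(alphas, max_skips=1))
# Only the strongest skip-connect survives; the other edges fall back to sep_conv_3x3.
```

The design choice here is a hard post-hoc cap; the paper's controller could equally act during sampling, but in either case the effect is the same: no sampled sub-network is allowed to collapse into a stack of skip-connects.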
ISSN: 1751-9632; 1751-9640
DOI: 10.1049/cvi2.12036