Building Optimal Neural Architectures using Interpretable Knowledge

Bibliographic Details
Main Authors: Mills, Keith G.; Han, Fred X.; Salameh, Mohammad; Lu, Shengyao; Zhou, Chunhua; He, Jiao; Sun, Fengyu; Niu, Di
Format: Journal Article
Language: English
Published: 20.03.2024
Summary: Neural Architecture Search is a costly practice. Because a search space can span a vast number of design choices, and each architecture evaluation carries nontrivial overhead, it is hard for an algorithm to explore candidate networks sufficiently. In this paper, we propose AutoBuild, a scheme which learns to align the latent embeddings of operations and architecture modules with the ground-truth performance of the architectures in which they appear. By doing so, AutoBuild can assign interpretable importance scores to architecture modules, ranging from individual operation features to larger macro operation sequences, so that high-performance neural networks can be constructed without any need for search. Through experiments on state-of-the-art image classification, segmentation, and Stable Diffusion models, we show that by mining a relatively small set of evaluated architectures, AutoBuild can learn to build high-quality architectures directly, or reduce the search space to focus on relevant regions, finding architectures that outperform both the original labeled ones and those found by search baselines. Code available at https://github.com/Ascend-Research/AutoBuild
DOI: 10.48550/arxiv.2403.13293
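
The summary describes two mechanisms: learning module embeddings whose scores rank consistently with the measured performance of the architectures containing them, and then assembling a network (or pruning a search space) from the highest-scored modules, with no search required. Below is a minimal illustrative sketch of that idea in PyTorch. The class name ModuleScorer, the pairwise hinge ranking loss, the toy performance numbers, and the per-slot candidate lists are all assumptions made here for illustration, not the AutoBuild implementation; see the linked repository for the actual code.

    import torch
    import torch.nn as nn

    class ModuleScorer(nn.Module):
        # Maps each module id to an embedding and a scalar importance score.
        def __init__(self, num_modules: int, dim: int = 32):
            super().__init__()
            self.embed = nn.Embedding(num_modules, dim)
            self.head = nn.Linear(dim, 1)

        def module_score(self, m: int) -> torch.Tensor:
            return self.head(self.embed(torch.tensor(m)))

        def arch_score(self, module_ids: list) -> torch.Tensor:
            # Score an architecture as the sum of its modules' scores.
            return self.head(self.embed(torch.tensor(module_ids))).sum()

    # Toy labeled set: (module ids in the architecture, measured accuracy).
    # The numbers are fabricated purely for illustration.
    archs = [([0, 3, 5], 0.71), ([1, 3, 6], 0.74), ([2, 4, 6], 0.69)]
    scorer = ModuleScorer(num_modules=8)
    opt = torch.optim.Adam(scorer.parameters(), lr=1e-2)

    # Pairwise hinge ranking loss: if architecture a outperforms b, its
    # predicted score should exceed b's by a margin, so module embeddings
    # become aligned with ground-truth performance.
    for _ in range(200):
        loss = torch.zeros(())
        for ids_a, perf_a in archs:
            for ids_b, perf_b in archs:
                if perf_a > perf_b:
                    gap = scorer.arch_score(ids_a) - scorer.arch_score(ids_b)
                    loss = loss + torch.relu(0.1 - gap)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Search-free construction: per slot, keep the highest-scored candidate.
    slots = [[0, 1, 2], [3, 4], [5, 6]]  # hypothetical candidates per slot
    with torch.no_grad():
        built = [max(c, key=lambda m: scorer.module_score(m).item())
                 for c in slots]
        # Search-space reduction instead: keep the top-2 candidates per slot
        # so a downstream search only explores promising regions.
        reduced = [sorted(c, key=lambda m: scorer.module_score(m).item(),
                          reverse=True)[:2] for c in slots]
    print("greedily built architecture:", built)
    print("reduced search space:", reduced)

The same learned scores serve both use cases mentioned in the summary: taking the argmax per slot builds an architecture outright, while keeping the top few candidates per slot yields the reduced search space that a conventional search algorithm can then explore more cheaply.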