The Double Descent Behavior in Two Layer Neural Network for Binary Classification

Bibliographic Details
Published in: Journal of Data Science, Vol. 23, no. 2, pp. 370-388
Main Authors: Abeykoon, Chathurika S.; Beknazaryan, Aleksandr; Sang, Hailin
Format: Journal Article
Language: English
Published: 中華資料採礦協會, 01.04.2025
Summary: Recent studies have observed a surprising phenomenon in model test error called double descent, in which increasing model complexity first decreases the test error, after which the error increases and then decreases again. To observe this, we study a two-layer neural network model with a ReLU activation function designed for binary classification under supervised learning. Our aim is to observe and investigate the mathematical theory behind the double descent behavior of the model test error for varying model sizes. We quantify the model size by the ratio of the number of training samples to the dimension of the model. Due to the complexity of the empirical risk minimization procedure, we use the Convex Gaussian Min-max Theorem to find a suitable candidate for the global training loss.
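The abstract describes tracking test error while the model size varies relative to the number of training samples. The Python sketch below is an illustrative, simplified version of that kind of experiment, not the authors' procedure: it uses a two-layer ReLU network with fixed random first-layer weights and a minimum-norm least-squares fit of the second layer on synthetic Gaussian data, and sweeps the hidden width to show how the 0-1 test error changes with the sample-to-parameter ratio. All dimensions, sample sizes, and helper names (make_data, fit_min_norm, and so on) are assumptions made for the example.

import numpy as np

rng = np.random.default_rng(0)

def make_data(n, d, w_star):
    # Gaussian features labeled by the sign of a linear "teacher" <w_star, x>.
    X = rng.standard_normal((n, d))
    y = np.sign(X @ w_star)
    return X, y

def relu_features(X, W):
    # First-layer ReLU features of a two-layer network with fixed random
    # first-layer weights W (a random-features simplification).
    return np.maximum(X @ W.T, 0.0)

def fit_min_norm(Phi, y):
    # Minimum-norm least-squares fit of the second-layer weights, a common
    # stand-in for the interpolating empirical-risk-minimization solution.
    return np.linalg.pinv(Phi) @ y

def test_error(W, a, X_te, y_te):
    # 0-1 classification error on held-out data.
    return np.mean(np.sign(relu_features(X_te, W) @ a) != y_te)

d, n_train, n_test = 30, 300, 2000          # illustrative sizes only
w_star = rng.standard_normal(d) / np.sqrt(d)
X_tr, y_tr = make_data(n_train, d, w_star)
X_te, y_te = make_data(n_test, d, w_star)

# Sweep the hidden width (model size); the interpolation threshold sits
# near width == n_train, where the test error typically peaks before
# descending again in the overparameterized regime.
for width in [10, 50, 100, 200, 300, 400, 800, 1600]:
    W = rng.standard_normal((width, d)) / np.sqrt(d)
    a = fit_min_norm(relu_features(X_tr, W), y_tr)
    print(f"width={width:5d}  n_train/width={n_train/width:6.2f}  "
          f"test error={test_error(W, a, X_te, y_te):.3f}")

In random-features experiments of this kind the test error typically peaks when the number of hidden units is comparable to the number of training samples and falls again as the width grows past that point, which is the double descent shape whose mathematical theory the paper investigates via the Convex Gaussian Min-max Theorem.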
ISSN: 1683-8602, 1680-743X
DOI: 10.6339/25-JDS1175