Leveraging Hierarchical Feature Sharing for Efficient Dataset Condensation

Bibliographic Details
Published in: arXiv.org
Main Authors: Zheng, Haizhong; Sun, Jiachen; Wu, Shutong; Kailkhura, Bhavya; Mao, Zhuoqing; Xiao, Chaowei; Prakash, Atul
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 19.07.2024
Summary: Given a real-world dataset, data condensation (DC) aims to synthesize a small synthetic dataset that captures the knowledge of the natural dataset while remaining usable for training models with comparable accuracy. Recent works propose to enhance DC with data parameterization, which condenses data into very compact parameterized data containers instead of images. The intuition behind data parameterization is to encode shared features of images to avoid additional storage costs. In this paper, we recognize that images share common features in a hierarchical way due to the inherent hierarchical structure of the classification system, a property overlooked by current data parameterization methods. To better align DC with this hierarchical nature and encourage more efficient information sharing inside data containers, we propose a novel data parameterization architecture, Hierarchical Memory Network (HMN). HMN stores condensed data in a three-tier structure that represents dataset-level, class-level, and instance-level features. A further benefit of the hierarchical architecture is that HMN naturally ensures good independence among images while still sharing information, which enables instance-level pruning to remove redundant information and further improve performance. We evaluate HMN on five public datasets and show that our proposed method outperforms all baselines.
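
To make the three-tier container concrete, here is a minimal PyTorch sketch of the structure the abstract describes. Everything beyond the three tiers is an assumption: the class name, tensor shapes, the sum-based feature combination, and the MLP decoder are illustrative placeholders, not the authors' implementation.

```python
import torch
import torch.nn as nn

class HierarchicalDataContainer(nn.Module):
    """Sketch of a three-tier condensed-data container in the spirit of HMN.

    All shapes and the decoder are illustrative assumptions.
    """

    def __init__(self, num_classes=10, per_class=10, feat_dim=128,
                 img_shape=(3, 32, 32)):
        super().__init__()
        self.img_shape = img_shape
        out_dim = img_shape[0] * img_shape[1] * img_shape[2]
        # Tier 1: a single dataset-level feature shared by all synthetic images.
        self.dataset_mem = nn.Parameter(torch.randn(feat_dim))
        # Tier 2: one class-level feature shared by images of the same class.
        self.class_mem = nn.Parameter(torch.randn(num_classes, feat_dim))
        # Tier 3: instance-level features, one per synthetic image.
        self.instance_mem = nn.Parameter(
            torch.randn(num_classes, per_class, feat_dim))
        # A small decoder mapping combined features back to pixel space.
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim, 512),
            nn.ReLU(),
            nn.Linear(512, out_dim),
        )

    def forward(self, class_idx: int, instance_idx: int) -> torch.Tensor:
        # Summing the three tiers is one simple way to combine them;
        # the paper may use a different composition.
        feat = (self.dataset_mem
                + self.class_mem[class_idx]
                + self.instance_mem[class_idx, instance_idx])
        return self.decoder(feat).view(self.img_shape)


# Usage: decode one synthetic image for class 3, instance 7; its parameters
# would then be trained with a standard condensation objective.
container = HierarchicalDataContainer()
img = container(class_idx=3, instance_idx=7)  # shape (3, 32, 32)
```

Note how the structure yields the independence property mentioned above: each synthetic image depends on exactly one instance-level vector, so pruning that vector removes only that image while leaving the shared dataset- and class-level tiers intact.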
ISSN: 2331-8422