Model-free feature selection to facilitate automatic discovery of divergent subgroups in tabular data


Bibliographic Details
Published in: 2022 IEEE International Conference on Big Data (Big Data), pp. 6039 - 6047
Main Authors: Tadesse, Girmaw Abebe; Ogallo, William; Cintas, Celia; Speakman, Skyler
Format: Conference Proceeding
Language: English
Published: IEEE, 17.12.2022

Summary: Data-centric AI encourages cleaning, evaluating, and understanding data in order to achieve trustworthy AI. Existing technologies, such as AutoML, make it easy to design and train models automatically, but there is no comparable capability for extracting data-centric insights. Manual stratification of tabular data by a given feature of interest (e.g., gender) does not scale to higher feature dimensions; this can be addressed through automatic discovery of divergent/anomalous subgroups. However, these automatic discovery techniques often search across a potentially exponential number of feature combinations, which can be simplified by a preceding feature selection step. Existing feature selection techniques for tabular data typically fit a particular model (e.g., XGBoost) in order to select important features. Such model-based selection is prone to model bias and spurious correlations, in addition to requiring extra resources to design, fine-tune, and train the model. In this paper, we propose a model-free, sparsity-based automatic feature selection (SAFS) framework to facilitate automatic discovery of divergent subgroups. Unlike filter-based selection techniques, we exploit the sparsity of objective measures across feature values to rank and select features. We validated SAFS on two publicly available datasets (MIMIC-III and Allstate Claims) and compared it with six existing feature selection methods. SAFS reduces feature selection time by factors of 81× and 104×, averaged across the existing methods, on the MIMIC-III and Claims datasets, respectively. Features selected by SAFS also achieve competitive detection performance: e.g., in the Claims dataset, the 18.3% of features selected by SAFS detected a divergent subgroup similar to that found using all features (Jaccard similarity of 0.95) with a 16× reduction in detection time.
DOI:10.1109/BigData55660.2022.10020842