Notes on Applicability of Explainable AI Methods to Machine Learning Models Using Features Extracted by Persistent Homology
Format: Journal Article
Language: English
Published: 15.10.2023
Summary: Data analysis that uses the output of topological data analysis (TDA)
as input for machine learning algorithms has been the subject of extensive
research. This approach offers a means of capturing the global structure of
data. Persistent homology (PH), a common methodology within TDA, has found
wide-ranging applications in machine learning. One key reason for the success
of the PH-ML pipeline is the deterministic nature of the feature extraction
performed by PH. The fact that relatively simple downstream machine learning
models achieve satisfactory accuracy on these extracted features underlines
the pipeline's superior interpretability. However, this interpretation runs
into difficulties: it fails to accurately reflect the feasible parameter
region of the data-generation process, and the physical or chemical
constraints that restrict that process. Against this backdrop, we explore the
potential application of explainable AI methodologies to the PH-ML pipeline.
We apply this approach to the problem of predicting gas adsorption in
metal-organic frameworks and demonstrate that it can yield suggestive results.
The code to reproduce our results is available at
https://github.com/naofumihama/xai_ph_ml
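
The abstract's central point, that PH-based feature extraction is deterministic, can be illustrated with a minimal sketch. The fragment below computes the 0-dimensional persistence diagram of a point cloud by hand (for H0, death times equal the edge weights of a minimum spanning tree) and turns it into a fixed-length feature vector. This is an illustrative toy, not the authors' method: real pipelines would use a PH library (e.g. GUDHI or Ripser) and higher-dimensional diagrams, and the function names here are invented for this example.

```python
import numpy as np

def h0_persistence(points):
    """0-dimensional persistence of a Euclidean point cloud.

    Every H0 feature is born at filtration value 0 and dies when two
    connected components merge; the death times are exactly the edge
    weights of a minimum spanning tree, computed here with Prim's
    algorithm on the full pairwise-distance matrix.
    """
    n = len(points)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    best = dist[0].copy()  # cheapest edge from the tree to each vertex
    deaths = []
    for _ in range(n - 1):
        candidates = np.where(in_tree, np.inf, best)
        j = int(np.argmin(candidates))  # closest vertex outside the tree
        deaths.append(candidates[j])    # merge height = death time
        in_tree[j] = True
        best = np.minimum(best, dist[j])
    return np.sort(np.array(deaths))

def ph_features(points):
    """Deterministic summary statistics of the H0 diagram,
    usable as input features for a simple downstream ML model."""
    d = h0_persistence(points)
    return np.array([d.sum(), d.max(), d.mean()])
```

Because the map from point cloud to feature vector involves no randomness, any attribution a downstream explainable-AI method assigns to these features is reproducible, which is the property the pipeline's interpretability rests on.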
DOI: 10.48550/arXiv.2310.09780