Exploring stable diffusion and interpretability of image classification using neural features
Main Authors:
Format: Conference Proceeding
Language: English
Published: SPIE, 12.06.2023
Summary: An ongoing challenge for image processing algorithms is how to go beyond identification, detection, and classification. This encompasses the ability to reason with contextual information, including what the whole image scene represents and how it informs interpretation. Limitations of current algorithms include approaches that focus on a single focal plane and discard peripheral elements, as well as datasets in which salient objects are captured close up but not from different angles or distances. Our research explores how to leverage not only core features but also features that may have been assumed to be spurious, so that contextual information improves algorithm performance in accuracy, interpretability, and meaning. In this paper we present research into causal reasoning and stable diffusion (Rombach et al., 2021), utilizing synthetic data for detection with core and spurious features.
Bibliography: Conference Location: Orlando, Florida, United States; Conference Date: 2023-04-30 to 2023-05-05
ISBN: 9781510661929, 1510661921
ISSN: 0277-786X
DOI: 10.1117/12.2662999
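The abstract describes using stable diffusion (Rombach et al., 2021) to generate synthetic data containing both core and spurious features for detection experiments. The record does not include the authors' actual pipeline, so the following is only a minimal sketch of that kind of data generation, assuming the Hugging Face diffusers library; the model identifier and prompts are illustrative assumptions, not the authors' setup.

```python
# Illustrative sketch only: the paper's data-generation pipeline is not described
# in this record. Assumes the Hugging Face `diffusers` library and the public
# "runwayml/stable-diffusion-v1-5" checkpoint (both assumptions).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Hypothetical prompt pair: the same core object (a stop sign) placed either in
# its usual context or in a mismatched one, so a downstream classifier can be
# probed for reliance on core versus spurious (contextual) features.
PROMPTS = {
    "core_consistent": "a photo of a stop sign at a street intersection",
    "spurious_shifted": "a photo of a stop sign on a beach at sunset",
}

for label, prompt in PROMPTS.items():
    for i in range(4):  # small batch per condition
        image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
        image.save(f"synthetic_{label}_{i:02d}.png")
```

Paired, controllable scenes like these are one way synthetic data can separate core-object cues from background (contextual) cues when evaluating classifier accuracy and interpretability.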