Explainable Activity Recognition over Interpretable Models

The majority of approaches to sensor-based activity recognition are based on supervised machine learning. While these methods reach high recognition rates, a major challenge is to understand the rationale behind the predictions of the classifier. Indeed, those predictions may have a relevant impact on the follow-up actions taken in a smart living environment. We propose a novel approach for eXplainable Activity Recognition (XAR) based on interpretable machine learning models. We generate explanations by combining the feature values with the feature importance obtained from the underlying trained classifier. A quantitative evaluation on a real dataset of ADLs (Activities of Daily Living) shows that our method is effective in providing explanations consistent with common knowledge. By comparing two popular ML models, our results also show that one-versus-one classifiers can provide better explanations in our framework.

Bibliographic Details
Published in: 2021 IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events (PerCom Workshops), pp. 32-37
Main Authors: Bettini, Claudio; Civitarese, Gabriele; Fiori, Michele
Format: Conference Proceeding
Language: English
Published: IEEE, 22.03.2021

DOI: 10.1109/PerComWorkshops51409.2021.9430955
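
The abstract describes the explanation mechanism only at a high level: feature values are combined with the feature importance scores of the underlying trained, interpretable classifier. As a rough, hypothetical illustration of that idea (not the authors' implementation), the Python sketch below ranks each active feature's contribution to a prediction from a toy logistic-regression model; the feature names, training data, and `explain` helper are all invented for this example.

```python
# Hypothetical sketch of the idea described in the abstract, NOT the
# authors' code: combine feature values with the feature importances of
# an interpretable classifier to explain an activity prediction.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented binary sensor features for two ADLs.
feature_names = ["fridge_opened", "stove_on", "water_tap_used", "tv_on"]
X_train = np.array([
    [1, 1, 1, 0],  # cooking
    [1, 0, 1, 0],  # cooking
    [0, 0, 0, 1],  # watching_tv
    [0, 0, 0, 1],  # watching_tv
])
y_train = np.array(["cooking", "cooking", "watching_tv", "watching_tv"])

# An interpretable model: its coefficients serve as feature importances.
clf = LogisticRegression().fit(X_train, y_train)

def explain(x, top_k=2):
    """Explain a prediction by ranking feature_value * feature_importance."""
    pred = clf.predict([x])[0]
    # In the binary case, coef_[0] holds the weights for the positive
    # class (clf.classes_[1]); negate them when the other class is predicted.
    weights = clf.coef_[0] if pred == clf.classes_[1] else -clf.coef_[0]
    contrib = weights * np.asarray(x, dtype=float)
    top = np.argsort(contrib)[::-1][:top_k]
    reasons = [feature_names[i] for i in top if contrib[i] > 0]
    return f"Predicted '{pred}' because: {', '.join(reasons)}"

print(explain([1, 1, 0, 0]))  # e.g. Predicted 'cooking' because: fridge_opened, stove_on
```

For the one-versus-one classifiers the paper finds preferable, each pairwise sub-model contributes its own coefficient vector, so an analogous explanation would aggregate contributions over the pairs involving the predicted class.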