Enhancing Data Privacy in Human Factors Studies with Federated Learning


Bibliographic Details
Published in: Human Factors, p. 187208251348025
Main Authors: Su, Bingyi; Qing, Liwei; Lu, Lu; Jung, SeHee; Fang, Xiaolei; Xu, Xu
Format: Journal Article
Language: English
Published: United States, 06.06.2025

Summary:
Objective: The objective is to develop a privacy-preserving federated learning framework and evaluate its efficacy for two specific human factors applications: classifying mental stress levels in human-robot collaboration and recognizing human activities during manual material handling.
Background: Machine learning, as a transformative tool, has reshaped the landscape of human factors and ergonomics research. Nevertheless, traditional centralized machine learning methods often encounter critical data privacy issues, especially when dealing with sensitive human data. This study addresses these concerns by implementing a federated learning approach.
Methods: Classifiers were constructed using both centralized and federated approaches, with machine learning techniques customized for each application. For mental stress classification, we utilized feature-based machine learning techniques, such as support vector machines. For human activity recognition, we deployed a deep neural network combining long short-term memory and convolutional neural network layers. Comparative analysis in terms of precision, recall, and F1-score was conducted to evaluate the performance of the federated and centralized models.
Results: The results demonstrate that federated learning not only offers accuracy comparable to centralized methods but also ensures the protection of sensitive data. The performance differences were minimal across both applications, with discrepancies remaining under 2.7%.
Conclusion: Federated learning proves to be a promising alternative to traditional machine learning models, offering comparable accuracy while significantly enhancing data privacy.
Application: The study's outcomes are particularly relevant for advancing privacy-preserving methodologies in fields involving sensitive human-subject data.
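The abstract does not detail the authors' implementation, but the core idea of federated learning can be illustrated with a minimal federated-averaging (FedAvg) sketch: each simulated client trains a simple logistic-regression model on its own private data, and a server aggregates only the model weights, never the raw samples. All data, client counts, and hyperparameters below are illustrative assumptions, not taken from the study.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    # One client's local training step (plain logistic-regression
    # gradient descent). Raw (X, y) never leave the client; only the
    # updated weight vector is shared with the server.
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (preds - y) / len(y)
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    # Server step: average client models, weighted by local sample count.
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Simulate three clients, each holding private data for a binary task.
rng = np.random.default_rng(0)
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 4))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)  # synthetic labels
    clients.append((X, y))

global_w = np.zeros(4)
for _ in range(10):  # communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(updates, [len(y) for _, y in clients])
```

In this toy setup the aggregated model behaves much like one trained on the pooled data, which mirrors the paper's finding that federated and centralized accuracy can be close while the raw human-subject data stays on-device.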
Bibliography:ObjectType-Article-1
SourceType-Scholarly Journals-1
ObjectType-Feature-2
ISSN: 0018-7208
EISSN: 1547-8181
DOI: 10.1177/00187208251348025