C2R: A Novel ANN Architecture for Boosting Indoor Positioning With Scarce Data
Published in: IEEE Internet of Things Journal, vol. 11, no. 20, pp. 32868-32882
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 15.10.2024
Summary: Improving the performance of artificial neural network (ANN) regression models on small or scarce data sets, such as wireless network positioning data, can be realized by simplifying the task. One such approach implements the regression model as a classifier, followed by a probabilistic mapping algorithm that transforms class probabilities into the multidimensional regression output. In this work, we propose the classification-to-regression model (C2R), a novel ANN-based architecture that transforms the classification model into a robust regressor while enabling end-to-end training. The proposed solution can remove the impact of less likely classes from the probabilistic mapping by implementing a novel, trainable differential thresholded rectified linear unit layer. The proposed solution is introduced and evaluated in the indoor positioning application domain, using 23 real-world, openly available positioning data sets. The proposed C2R model is shown to achieve significant improvements over numerous benchmark methods in terms of positioning accuracy. Specifically, when averaged across the 23 data sets, the proposed C2R improves the mean positioning error by 7.9% compared to weighted k-nearest neighbors (kNN) with k = 3, from 5.43 to 5.00 m, and by 15.4% compared to a dense neural network (DNN), from 5.91 to 5.00 m, while adapting the learned threshold. Finally, the proposed method adds only a single training parameter to the ANN; thus, as shown analytically and empirically in the article, there is no significant increase in computational complexity.
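The probabilistic mapping described in the summary can be illustrated with a minimal NumPy sketch: the classifier's softmax probabilities are gated by a thresholded ReLU that zeroes out unlikely classes, renormalized, and then used as weights over per-class reference coordinates. The function name, the fixed threshold value, and the reference coordinates below are illustrative assumptions; in the actual C2R architecture the threshold is a single trainable parameter learned end-to-end.

```python
import numpy as np

def c2r_mapping(class_probs, class_coords, threshold=0.05):
    """Map class probabilities to coordinates (illustrative sketch, not the paper's code).

    Probabilities at or below `threshold` are zeroed by a thresholded-ReLU-style
    gate; the remaining probability mass is renormalized and used to form a
    convex combination of each class's reference coordinates. Assumes at least
    one probability exceeds the threshold.
    """
    gated = np.where(class_probs > threshold, class_probs, 0.0)  # drop unlikely classes
    gated = gated / gated.sum()                                  # renormalize remaining mass
    return gated @ class_coords                                  # weighted coordinate estimate

# Hypothetical example: 4 reference points of a 2-D positioning grid.
coords = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
probs = np.array([0.70, 0.25, 0.03, 0.02])  # softmax output of the classifier
estimate = c2r_mapping(probs, coords)        # -> approx. [2.63, 0.0]
```

With the two low-probability classes removed, the estimate is pulled only toward the two plausible reference points, which is the intuition behind gating the mapping.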
ISSN: 2327-4662
DOI: 10.1109/JIOT.2024.3420122