An eye–hand data fusion framework for pervasive sensing of surgical activities
Published in: Pattern Recognition, Vol. 45, No. 8, pp. 2855–2867
Format: Journal Article
Language: English
Published: Kidlington: Elsevier Ltd, 01.08.2012
Summary: This paper describes a generic framework for activity recognition based on temporal signals acquired from multiple input modalities and demonstrates its use for eye–hand data fusion. As part of the data fusion framework, we present a multi-objective Bayesian Framework for Feature Selection (BFFS) with a pruned-tree search algorithm for finding the optimal feature set(s) in a computationally efficient manner. Experiments on endoscopic surgical episode recognition are used to investigate the potential of using eye-tracking for pervasive monitoring of surgical operations and to demonstrate how additional information induced by hand motion can further enhance recognition accuracy. With the proposed multi-objective BFFS algorithm, feature sets that are suitable in terms of both feature relevancy and redundancy can be identified with a minimal number of instruments being tracked.
Highlights:
► We propose a generic eye–hand fusion framework for activity recognition.
► We propose a multi-objective BFFS with a pruned-tree search algorithm for finding the optimal feature set(s).
► Endoscopic surgical episode recognition experiments are performed with a combined use of eye-tracking and motion sensing.
► Optimal feature sets, in terms of feature relevancy, redundancy and number of instruments being tracked, are identified.
► We validate the framework with surgical episode recognition experiments using various types of classifiers.
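To make the feature-selection idea in the summary concrete, the following is a minimal, hypothetical Python sketch of a two-objective (relevance vs. redundancy) feature-set search. It is not the authors' BFFS implementation: the function names (mutual_info, score_set, pruned_search), the beam-style pruning, the scalarised trade-off, and the toy data are all illustrative assumptions, standing in for the paper's Bayesian multi-objective formulation.

```python
import itertools
import numpy as np

def mutual_info(x, y):
    """Mutual information (nats) between two discrete 1-D integer arrays."""
    joint = np.zeros((int(x.max()) + 1, int(y.max()) + 1))
    for xi, yi in zip(x, y):
        joint[xi, yi] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)   # marginal of x
    py = joint.sum(axis=0, keepdims=True)   # marginal of y
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

def score_set(X, y, subset):
    """Two objectives for a candidate feature subset:
    relevance (sum of MI with the label, to maximise) and
    redundancy (mean pairwise MI within the subset, to minimise)."""
    relevance = sum(mutual_info(X[:, j], y) for j in subset)
    pairs = list(itertools.combinations(subset, 2))
    redundancy = (sum(mutual_info(X[:, a], X[:, b]) for a, b in pairs)
                  / max(len(pairs), 1))
    return relevance, redundancy

def pruned_search(X, y, max_size, beam=5):
    """Grow feature sets level by level, keeping only the `beam` best
    candidates at each level; a crude stand-in for a pruned-tree search."""
    frontier = [()]
    best, best_score = (), -np.inf
    for _ in range(max_size):
        scored = {}
        for s in frontier:
            for j in range(X.shape[1]):
                if j in s:
                    continue
                t = tuple(sorted(s + (j,)))
                if t not in scored:
                    rel, red = score_set(X, y, t)
                    scored[t] = rel - red  # simple scalarised trade-off
        ranked = sorted(scored.items(), key=lambda kv: kv[1], reverse=True)
        frontier = [t for t, _ in ranked[:beam]]
        if ranked and ranked[0][1] > best_score:
            best, best_score = ranked[0]
    return best, best_score

# Toy usage: one informative feature, one redundant copy, one irrelevant.
rng = np.random.default_rng(0)
y = rng.integers(0, 3, 300)                      # e.g. episode labels
X = np.column_stack([
    y,                                           # informative
    (y + rng.integers(0, 2, 300)) % 3,           # noisy, redundant copy
    rng.integers(0, 3, 300),                     # irrelevant
])
print(pruned_search(X, y, max_size=2))           # should favour feature 0
```

Note that the paper handles the competing objectives in a multi-objective Bayesian setting rather than collapsing them into the single scalarised score used above; the sketch only illustrates why pruning the search tree matters when the number of candidate feature subsets grows combinatorially.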
ISSN: 0031-3203 (print), 1873-5142 (online)
DOI: 10.1016/j.patcog.2012.01.008