Detecting and tracking leukocytes in intravital video microscopy using a Hessian-based spatiotemporal approach

Bibliographic Details
Published in: Multidimensional Systems and Signal Processing, Vol. 30, No. 2, pp. 815-839
Main Authors: Gregório da Silva, Bruno C.; Carvalho-Tavares, Juliana; Ferrari, Ricardo J.
Format: Journal Article
Language: English
Published: New York: Springer US, 01.04.2019 (Springer Nature B.V.)
Summary: Leukocyte recruitment analysis is an important step in understanding the interactions between leukocytes and endothelial cells in the microcirculation of living animals. Performed preferably with the intravital video microscopy technique, this procedure usually requires expert visual analysis, which is prone to inter- and intra-observer variability. This problem therefore calls for an automated method to detect and track these cells. To this end, we developed an approach that combines two different analyses: in the first (2D), all video frames are processed individually with a blob-like structure detector to find the leukocyte centroids, while in the second (2D + t), a spatiotemporal image (created by stacking all video frames) is processed with a tubular-like structure detector to determine the leukocyte trajectories over time. For both analyses, the detectors are based on the relationship between the Hessian matrix eigenvalues computed locally from the image sequences. The proposed approach was evaluated by comparing our technique to manual annotations using precision, recall, and F1-score on two video sequences. The average results for these measures were, respectively, 0.84, 0.64, and 0.72 for the first video, and 0.84, 0.87, and 0.86 for the second. These results suggest that our approach is comparable with manual annotations performed by experts and has excellent potential for use in real circumstances. Moreover, it can reduce observer variability and the burden of visual analysis.
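To illustrate the kind of Hessian-eigenvalue analysis the abstract describes for the 2D step, the following Python sketch computes a per-pixel blob-likeness map for a single grayscale frame. The scale parameter `sigma`, the specific response function, and the function name `blobness_map` are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): Hessian-eigenvalue blob-likeness
# map for one video frame, in the spirit of the 2D analysis described above.
import numpy as np
from scipy.ndimage import gaussian_filter

def blobness_map(frame, sigma=3.0):
    """Return a per-pixel blob-likeness score for a grayscale frame.

    The Hessian is estimated with Gaussian second derivatives at scale
    `sigma`; pixels where both eigenvalues are large and negative
    (bright, roughly circular structures) receive high scores.
    """
    f = frame.astype(float)
    # Scale-normalized second-order Gaussian derivatives (axis 0 = rows/y,
    # axis 1 = columns/x).
    Hxx = gaussian_filter(f, sigma, order=(0, 2)) * sigma**2
    Hyy = gaussian_filter(f, sigma, order=(2, 0)) * sigma**2
    Hxy = gaussian_filter(f, sigma, order=(1, 1)) * sigma**2

    # Closed-form eigenvalues of the 2x2 symmetric Hessian at every pixel.
    trace = Hxx + Hyy
    diff = np.sqrt((Hxx - Hyy) ** 2 + 4.0 * Hxy**2)
    lam1 = 0.5 * (trace + diff)  # larger eigenvalue
    lam2 = 0.5 * (trace - diff)  # smaller eigenvalue

    # Blob-like relationship: both eigenvalues negative (bright blob) and of
    # comparable magnitude; score them by the geometric mean of |eigenvalues|.
    bright_blob = (lam1 < 0) & (lam2 < 0)
    score = np.sqrt(np.abs(lam1 * lam2))
    return np.where(bright_blob, score, 0.0)
```

Leukocyte centroids could then be taken as local maxima of this score map (e.g., with skimage.feature.peak_local_max). For the 2D + t step described in the abstract, the analogous construction would use the 3x3 Hessian of the stacked frame volume and an eigenvalue pattern selecting tube-like structures, so that a cell's trajectory appears as a tube along the time axis.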
ISSN: 0923-6082
ISSN: 1573-0824
DOI: 10.1007/s11045-018-0581-5